07 January 2005

XML Interface: Providing XML feed warning information to those without access to the information technology

Definition: XML interface

Method of providing EDI-XML information to non-connected users.


Reader Update: here

Robert X. Cringely, Danny Ayers, and Bob Wyman assume XML will work for Tsunami Warnings. Excellent. Let's not forget those who have no access to XML, feeds, or the internet.

Overview

I recently blogged about building off Griffin's blog to create an XML table. The XML table would be a platform to integrate various XML-users into a single system. All the participants at the XML table would have two-way statusing: Being able to receive and provide inputs to all nodes connected to the XML table.

Purpose

This blog entry expands on that concept, identifying specific data that could be transmitted along this table; and then addresses the question, "How do we provide this information to participants who have no access to the table?"

Application: Intended use and relevance to the real world

At this juncture, my primary objective in outlining these concepts is to provide additional inputs on the developing XML-related technology that could be used in disaster preparation and warning. Specifically, I hope to provide some thoughts from the "end user's perspective" on what XML-related products might be useful in both preparing for and recovering from a disaster, such as a Tsunami.

Problem: Inability to notify non-connected clients to information otherwise available electronically

The problem with the current system is that those most likely to need assistance are least likely to be connected to the electronic information.

Clearly, there are several non-electronic methods available. The challenge is in figuring out which methods would still be reliable in the absence of electricity or connection to the web; and at the same time remain reliable immediately prior to, during, and after the disaster.

For example, one could suggest that a way to interface between the internet and non-connected users would be to use a system of bells, music, sounds, and visual cuing devices like colored smoke.

One flaw with non-electronic methods is that they may fail immediately prior to the disaster. For example, one novel approach to send XML-EDI information to non-connected users would be to use homing pigeons to notify distant provinces of a critical event.

However, this presupposes that the disaster-pre-event conditions will not impact the pigeon. There have been numerous reports that there were "no animals" in the Tsunami's wake, suggesting that animals were affected or "knew in advance" that there was an imminent problem.

The immediate issue with respect to using pigeons to transmit XML-EDI data to non-connected users is that the pigeons may very well sense an impending disaster and flee, never arriving at their intended destination.

This is not necessarily a problem, because the flip side of this "possible non-arrival" is that the non-arrival could itself be a significant signal.

Specifically, if animal behavior just prior to a disaster were studied further, a change in that behavior could itself be the needed signal of a problem.

Regular timing of information

If we use the analogy of pings and bit-torrents, we might arrive at another perspective. Specifically, when the XML systems ping each other, they are merely statusing "I'm here, this is the change."

If a "non-electronic warning system" was one where the pigeons were regularly dispatched at pre-determined times, with known flight times, then a change in the arrival time, or the failure to show/appear could in itself be important information, so long as the baseline arrivals were reliable enough.

This is to say that the baseline information of the XML interface system could be a regular dispatch of homing pigeons with small status messages. As soon as their arrival times change, or they fail to show, the participants would know to look into the matter deeply.

Clearly, this requires research as to whether the "non-arrival of a pigeon, or number of pigeons over time relative to a baseline arrival time" would be something that could be reliably used to predict earthquake or Tsunami conditions.
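The baseline-and-deviation logic is simple enough to sketch. The flight time and tolerance below are illustrative assumptions, not measured values:

```python
from datetime import datetime, timedelta

# Illustrative baseline: messengers dispatched on a fixed schedule with a
# known flight time; a missed or late arrival is itself the warning signal.
EXPECTED_FLIGHT = timedelta(minutes=90)   # assumed baseline flight time
TOLERANCE = timedelta(minutes=20)         # assumed acceptable jitter

def overdue_dispatches(dispatches, arrivals, now):
    """dispatches: list of dispatch datetimes.
    arrivals: dict mapping dispatch datetime -> arrival datetime (if any).
    Returns the dispatches whose messengers are late or missing entirely."""
    flagged = []
    for sent in dispatches:
        deadline = sent + EXPECTED_FLIGHT + TOLERANCE
        arrived = arrivals.get(sent)
        if arrived is None and now > deadline:
            flagged.append(sent)          # never showed up
        elif arrived is not None and arrived > deadline:
            flagged.append(sent)          # showed up, but abnormally late
    return flagged
```

The same skeleton applies to any heartbeat system, electronic or not: what matters is a reliable baseline to deviate from.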

XML Table

One idea behind the XML table is that the participants would have two-way statusing. Currently, the XML-EDI concept is focused mostly on one-way communication, providing content and information to the other end of the pipeline.

However, under an XML-table concept, the participants would be able to communicate both ways. No longer would a feed-reader be an idle consumer, but there would be a mechanism to provide direct feedback to the XML-content provider.

With respect to Tsunamis, the concept is this: An XML-reader would not simply review status messages. The XML-reader would also be able to provide feedback information to the central table on local conditions.

Think of an earthquake monitoring system just after an earthquake. One method of evaluating the scope of the earthquake-impact using non-Richter information is to have various people self-report the damage: Books falling off shelves, broken dishes, windows cracked, cars moved, building foundations changed.

These aggregate verbal-text-status messages can be roughly related to a Richter-reading. Again, the precision does not appear to be high.
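As a toy illustration of relating anecdotes to a rough intensity, one could assign each report type a hypothetical weight and average them. The weights below are invented for illustration and are not a calibrated macroseismic scale:

```python
# Hypothetical severity weights for self-reported anecdotes; a real
# macroseismic scale (e.g. Modified Mercalli) is far more careful.
REPORT_WEIGHTS = {
    "books off shelves": 4,
    "broken dishes": 5,
    "windows cracked": 6,
    "cars moved": 7,
    "foundation shifted": 8,
}

def rough_intensity(reports):
    """Average the weights of the anecdotes received so far.
    Returns None until at least one recognizable report arrives."""
    scores = [REPORT_WEIGHTS[r] for r in reports if r in REPORT_WEIGHTS]
    return sum(scores) / len(scores) if scores else None
```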

However, there is a mapping system used in law enforcement whereby specific crime scenes are mapped. Over time, similar crimes are consolidated into a single image. Using linear programming and statistics, there are proven methods to pinpoint the time, route, likely origin, and possible next crime scenes.

There are cases whereby bank robberies were analyzed and law enforcement, using this mapping technique, was able to predict where and when the next robbery would occur, and was waiting for the bank robber. Ironically, in one case the robber showed up not in their own car but in a taxi, and law enforcement was able to follow them and make the arrest.

This mapping system can also be used to pinpoint the likely home address of the criminal. This is to say that criminals tend to act in predictable ways, traveling certain distances and making certain decisions about how far they will travel from home. They have to move far enough away to avoid being recognized, yet stay within traveling distance.

These types of decisions constrain the "drunken sailor's random walk" and help identify specific target, travel, time, and distance information, which both provides predictive power and allows a trace back to the origin.

Behavior and event mapping

Let's apply this statistical mapping theory used in law enforcement to that of earthquakes. Going back to our anecdotal reporting of earthquake-effects, we could apply the law enforcement criminal mapping system to the earthquake reports.

This is to say, that although an individual report may in itself not say much, over time and with many reports we get an aggregate picture of the entire earthquake or disaster area. Also, using the law enforcement modeling approach with enough damage reports we should be able to pinpoint the center of the earthquake, even if it is offshore.

In other words, even though the individual nodes and people along the coast of Indonesia may not have had any idea where the earthquake was, they might have been able to self-report various earth tremors.

If those reports can be then aggregated into a single map, the warning system will have another piece of information.
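A crude sketch of that aggregation: weight each report's location by its felt intensity and take the centroid. Real epicenter location inverts a wave-attenuation model; this only illustrates the aggregation idea:

```python
def estimate_epicenter(reports):
    """reports: list of (lat, lon, intensity) tuples from individual nodes.
    Crude estimate: an intensity-weighted centroid, on the assumption that
    stronger shaking was reported nearer the origin."""
    total = sum(w for _, _, w in reports)
    lat = sum(la * w for la, _, w in reports) / total
    lon = sum(lo * w for _, lo, w in reports) / total
    return lat, lon
```

Even this naive estimate improves as more nodes report, which is the point: individually weak signals become useful in aggregate.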

Again, this is not intended to duplicate more robust systems, such as the Tsunami warning system that currently operates in Hawaii.

Rather, this approach is designed to complement the existing systems and be used in situations where there are no primary systems.

Again, this simply illustrates that XML-content readers can provide valuable information to the XML table. The aggregate information is important in predicting behavior.

Other analogies

Other similar approaches include free traffic reports. Drivers in their cars can use a free cell code to provide reports of traffic accidents. Also, pilots and aircraft commanders have something called PIREPs, which are Pilot Reports of conditions related to weather or other in-flight information.

This is simply to suggest that current warning systems already do what we might wish to be done in the future: Free information is provided from a node to a central collection point.

At the same time, the issue becomes then disseminating this aggregated information back to the individual nodes. Currently we have XML aggregators which do this nicely.

However, in the situation of a Tsunami, the challenge becomes one of providing the information to the flight crews so that they can launch their colored-smoke flares and provide a larger warning.

I have a solution. Dave Winer has reportedly been working with iPods to allow Satellite-transmission of audio information. Pilots in aircraft have commercial frequencies in their radios.

The idea is this: What if the XML tables had a special emergency channel through the satellite-iPod system, with a direct XML feed, both audibly and in text, to the C-130 aircraft commanders?

Although an individual node may not have any information, the aggregate information delivered via iPod Satellite could provide timely information to both launch the aircraft, or set-off real-time smoke signals from the C-130 platform.

For those of you who are lost, let me say it another way.


Overall Concept


The idea is to allow individuals on the ground to self-report earthquake anecdotes to a central location; this information is aggregated; the law enforcement statistical analysis approach is used to pinpoint the earthquake origin and predict future events; and this aggregate information is then provided through iPod-Satellite to aircraft commanders, who launch colored smoke flares along the coast to warn those who have no direct access to EDI-XML.

The smoke would then signal to the pigeon-owners to do something different; if the homing pigeons are regularly dispatched prior to the event, then their non-arrival will also provide a second data-point for those not directly connected to XML-EDI.

In theory the satellite-notification system could be directly linked to regional and international warning centers such as CNN, the BBC, or even the American NORAD system. The system could provide valuable input information, and also provide a means to back-channel information.


Clearly, the above is speculative. But it is intended to provide a complementary system when all primary communication systems fail, with the goal of disseminating emergency information to those who are not connected to the internet and are not located near the existing information systems.

Building a system which can do this

Clearly, the next step is to construct such a system. Again, this is not intended to be the primary organizing method. Rather, it is intended to be a back-up system should the primary communication systems fail, and conditions are so bad that even amateur radio operators are unable to provide information.

Griffin's blog in India is a baseline of information on "what types of information need to be transmitted." The information is valuable in itself. But researchers, XML-EDI developers, and recovery planners also need to dedicate resources to monitor summary reports of the types of information available and what information suddenly appears out of the blue; and to continue refining the mechanisms that ensure there are systems in place to manage this information and deliver it when the system is most capable of responding.

This is another way of saying that Griffin's Blog begins to formalize the data-sets which need to be incorporated into future disaster recovery efforts. Specifically, what is needed is a method to ensure the critical requirements are prioritized; that there is a system in place to dovetail the known requirements with the incoming funding; and that these trouble reports are aggregated to provide a broad prediction of the types of resources, needs, and future requirements at the aggregate level.

In short, we propose that Griffin's Blog be used as a source of information for constructing the system, stress testing the variables and responses, and then ensuring that the final system can adequately manage the scope of conditions reported in Griffin's Blog.

Also, what is needed is an effort to ensure that the baseline data in Griffin's blog is aggregated to predict future trends; identify unfolding requirements.

This is another way of saying "now we know what the problem is", we now have a situation where the XML program management tools can be used. In other words, as disaster-related information arrives, XML management tools can time the inflow of this information so that the most urgent needs are given greatest visibility; and at the same time, the less urgent, but still important requirements are timed and delayed yet still planned for.

In other words, we use the incoming data to drive the management system. This is one of the speculated concepts behind XML feed integration. We no longer rely on set models and constructs to evaluate the incoming data; rather, we allow the incoming data to act as a check on whether the current model we are using to evaluate this aggregate information is working or needs to be revised.

In short, although a single data point may mean "nothing," if we allow the single data point to challenge the assumptions of the existing XML-data system, we might have a realistic basis to say, "This one data-input, although isolated, is telling us something important that our current assumptions would otherwise explain away."

This was the problem at Pearl Harbor in 1941. The single data points were not allowed to robustly challenge the overall model and planning.

This new concept would turn these assumptions upside down. This is not to say that we respond at every slight indication; rather, we develop a system that gives greater visibility and weight to those indicators of impending problems, and puts a greater emphasis on these small indicators so they are given priority in the communication system along the XML pipeline.
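One hedged sketch of such weighting: score each new reading against the running baseline, and give higher priority, not less, to readings that deviate sharply. The two-sigma threshold is an arbitrary illustrative choice:

```python
from statistics import mean, pstdev

def anomaly_priority(history, new_value, threshold=2.0):
    """Score a new reading against the running baseline. Readings far
    from the baseline get HIGHER priority instead of being discarded,
    so a single odd data point can challenge the model's assumptions.
    The 2-sigma threshold is an illustrative assumption."""
    if len(history) < 2:
        return 0.0                         # no baseline yet
    mu, sigma = mean(history), pstdev(history)
    if sigma == 0:
        return float("inf") if new_value != mu else 0.0
    z = abs(new_value - mu) / sigma        # standardized deviation
    return z if z >= threshold else 0.0
```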

Again, the aggregate picture will tell us what is going on at the aggregate level. Yet an isolated report from a remote villager may be the needed warning that things are starting to rumble far off the coast, well before the system in Hawaii can analyze the earthquake and provide a warning to an isolated fisherman far out at sea.

The new system would essentially act as a back-up in taking these isolated reports, aggregating them, and then quickly providing the summary information to those who can most quickly send warning signals using smoke flares, chimes, music, speakers, and even homing pigeons.

Major efforts

Moving from Griffin's Blog data to the final concept

The following are the major activities needed to accomplish this effort:

  • Monitor data/types in Griffin's blog
  • Proof of concept
  • Establish the connections within the nodes
  • Identify requirements
  • Coding EDI-XML and testing
  • Acquiring the hardware
  • Conducting the integration testing between XML and non-connected clients
  • Real life testing
  • During a simulated disaster, show that the system can match the emerging, forecasted, and predicted requirements with incoming assistance, funding, and attention
  • Demonstrate that XML management tools can adequately prioritize needs and allocate revenues to needs, all the while timing the funding to meet long term recovery needs


  • Building the XML table

    The XML table is to allow individual nodes to provide input status messages on anecdotal events like earth tremors, books falling off shelves, and strange noises. This system would accept free SMS cell inputs.

    The XML table is a program management tool.


    XML Table

  • Two-way statusing of table and individual units [Bit Torrent]
  • Single-snapshot and picture of the entire area of interest, whether it be an XML pipeline or a geographic snapshot like LandScan.
  • Program management and calendarizing: Timing feeds and XML-EDI information to tasks and events as the system is ready to respond.
  • XML table would strip out relevant information from the XML feed with an integrator; participants would use the aggregate information as desired for training, simulation, forecasting, and preparation.
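As a sketch of what a two-way status message to the table might look like, here is a minimal XML payload built in Python. The element and attribute names are hypothetical, not any existing standard:

```python
import xml.etree.ElementTree as ET

def status_message(node_id, event, detail):
    """Build a minimal two-way status message for the XML table.
    Element and attribute names here are hypothetical placeholders."""
    msg = ET.Element("status", {"node": node_id})
    ET.SubElement(msg, "event").text = event
    ET.SubElement(msg, "detail").text = detail
    return ET.tostring(msg, encoding="unicode")
```

The same structure could travel either direction: a node reporting a tremor upstream, or the table pushing an aggregated warning back down.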


  • Let's consider how the individual participants might use the information. If we look at a picture of the Indian Ocean, we can use this as an analogy for the table. All the people along the coast line and going in-land become "nodes at the XML table."

    All the participants have a chance to provide input, access to information and integrate the XML-EDI feeds as they desire; use the information for training simulations, requirements-resource matching; and statusing the system.

    Goal

    Going from Griffin's blog to the final concept means cataloging the type of information in Griffin's blog, and creating XML integration systems that can strip out the valuable information; and preparing a set of requirements on the type of XML table needed and ready to take similar inputs in a future disaster preparation and recovery effort.

    Disaster support

    Also need to make sure that recovery services are involved. Make sure they understand XML-EDI; and also that XML-EDI requirements in this system are calibrated to solve the end-client requirements at the time of the disaster.

    Analogy

    A central management tool like Drupal could be used to manage this program. Also, one analogy for "reporting the reliability of the character" would be Charity Navigator, which would identify the various factors and indicators of the various systems and approaches, ensuring the XML developers knew what they were working with and had a good understanding of the reporting reliability of the various components.

    What is different is the two-way statusing of the nodes: two-way communication along what is currently only a one-way XML pipeline.

    Feed integration in disaster preparation and recovery

    The issue isn't simply looking at snapshots of information, but comparing various factors along the feed pipeline.


    Sample Factors of XML Pipeline and Table

  • how the data changed and why;
  • depicting the information in a summary way
  • Comparing the current summary presentation with other feeds, reports, summaries, and predictions on trends, origins, and likely specific next events
  • Drawing inferences about the reliability and credibility of the feed, not just the data but in terms of a stable, reliable source of information. For example, being consistently wrong and unreliable is sometimes more valuable information than being inconsistently reliable.


  • Testing

    For purposes of feed management, integration, and decision one may choose to ignore the rules and simply test the responsiveness of the system.


    Sample Responsiveness Criteria

  • Adequacy of response
  • Connection channels
  • Feedback
  • Effectiveness in situation A vs. what is claimed to be impossible in situation B
  • Suitability of regulatory system with respect to standards, conduct, and oversight


  • Sample uses of XML Table platform data

    The next few paragraphs are some ideas on what might be some sample "day to day uses" of the aggregated-integrated XML table data. This is merely to discuss specific examples where policy makers may ask, "Give me some examples where this could be used all the time, not just in an emergency."

    This is another way of saying that the system, for it to be reliable in a disaster, needs to show itself to be reliable for every day use, and is a system that people are familiar with, trust, and regularly use. There's no sense showing up just prior to a disaster with a new technology that the clients are unfamiliar with.

    Feed integration for personal decisions

    Sample decisions individual participants could make using XML Table

  • Environmental risks
  • Forecasts on interacting with other players
  • Predictive information on time to acquire support
  • Trends on who is hiring labor to engage in relief work
  • Indicators on risks when interacting with various nodes
  • Adequacy of plans and preparatory requirements to interact with these realities
  • Developing personal plans to prepare and integrate these decisions into their day-to-day lives.


  • Sample of types of information that individuals may need

  • Baseline data on the reliability of the system
  • Anecdotes of tests and current system capabilities and flaws
  • Responsiveness of colleagues and peers to the current capabilities and likely requirements
  • Cost trends of replacement products for restart
  • Personal requirements on satisfaction with timeliness of municipal notices
  • Information on no-notice audits of the various components and the relative weights of these evaluations in terms of what core problems need attention
  • Success criteria, and progress/outcomes with respect to this success
  • Anecdotes of deviations


  • Evaluations

    These are the types of evaluations the individual XML table participant could make when integrating XML feeds into their daily planning:


    Sample End-user evaluations

  • Adequacy of the infrastructure with respect to goals
  • Quality-of-life standards vs. what people are actually getting vs. what they are likely to experience in both a disaster and the follow-on effort
  • True social costs when exposed to conditions


  • Decisions

    These are the types of decisions people could make who are applying this model of feed integration and XML tables to a disaster preparation and recovery effort:

    Sample End-User Decisions

  • Where to put money, time, energy, and focus

  • What to wait on, not purchase, delay for another day

  • When to move, change location, and restart


  • Based on the above factors, evaluations, decisions, and results, the XML table will know how to change the incoming feeds, what is missing, and what new methods and monitoring are needed to identify the holes and gaps in the information pipeline.

    Feed integration in investment banking

    Such a massive undertaking requires funding. To date, the relief efforts rely on contributions. However, there will be a time when debt-forgiveness subsides, and the investment bankers will appear to dovetail long-term development efforts with local requirements.

    The XML table and feed integration concept will do several things for those who seek to organize the long-term financing.


    Sample Investment Banking Analysis

  • Identify decision points; identify likely problems when a written contract says one thing, but what is most likely to happen and be enforceable is something else

  • Contrast reported trends vs indicators; identifying new thresholds, baselines, and warnings to the suite of tools used to monitor investment risks [non-financial metrics to evaluate risks of default: Energy use relative to capacity throughput vs actual output vs reported output vs what the facility is reasonably able to produce].

  • Place weights to anecdotes; change weights given to sources increasingly questionable because of lack of corroboration from other sources, or no actual "next steps" as would be expected if the forecasted-trend and expected-momentum were real.


  • XML Table information for financial planners

    The concept could integrate the following indicators and data into the XML table, delivered via XML streams to the table and back to the individual nodes.

    Sample factors

  • Industry analysis
  • Reliability of management
  • OpCash trends
  • Company-reported trends
  • Oil demand and energy consumption trends locally and globally as a check on the reasonableness of reported output and growth
  • "per cent-utilization of resources" provided in XML-streams
  • reported ship times
  • Anecdotes on the ship-time, storage, and offsite housing
  • Product sales vs product orders and backlogs.


  • Essentially the above indicators would identify backlogs in the logistics pipeline, and identify transshipment alternatives.

    For example, the above indicators can be monitored to know in advance:

    "if we keep shoving goods into a port that does not have adequate fuel, or transshipment, we need to slow down-stop the shipments until we are sure we have helicopters in place to move the goods; otherwise, the goods have a high chance of getting damaged because they have not been packaged correctly to sit outside in an unprotected area."
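That throughput check amounts to comparing inbound flow against the weakest link in the pipeline. A sketch, where the figures and the 1.5x buffer margin are illustrative assumptions:

```python
def shipment_decision(inbound_tons, port_capacity_tons, onward_lift_tons):
    """Decide whether to keep shipping into a port. The bottleneck is
    whichever is smaller: offload capacity or onward transport (e.g.
    helicopter lift). Returns 'continue', 'slow', or 'stop'."""
    bottleneck = min(port_capacity_tons, onward_lift_tons)
    if inbound_tons <= bottleneck:
        return "continue"
    if inbound_tons <= bottleneck * 1.5:   # assumed acceptable backlog margin
        return "slow"
    return "stop"
```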

    Monitoring XML table concept

    It's all well and good to come up with a big idea. The trick is to make it work. In turn, a plan to succeed also needs to be monitored to make sure it is actually working.

    This is where the XML table auditors come in. Auditors need to be involved in the XML table process. They will be the ones that need to understand the system, know the requirements, goals, and policies; and then be in a position to provide timely information to management so that corrective action can be taken.

    Ideally, it would be nice if the XML table concept could be used to integrate the status of this program; allow the individual components to self-report their progress toward the milestones; and then simply show that the XML table concept works as a means to provide two-way communication-statusing along the XML pipeline.

    Tasks for auditors

    Auditors will need to work with system engineers to stress test the XML pipeline and table components.


    Sample Auditing Objectives

  • Adequacy of the feed
  • Internal controls
  • Information access
  • Timeliness of updates
  • Adequacy of closeout and rescheduling of efforts
  • Post-event auditing: Comparing goals to actuals, and documenting corrective action plans.


  • Auditors need to also be able to use the XML feed tools to do their jobs.

    Sample Auditor Uses of XML Table

  • Integrate feed requirements with policies
  • Identify feeds at odds with baseline requirements
  • Monitor the feed archive streams
  • Compare management indicators to monitor closeout
  • Compare change from the baseline
  • Ensure there is a system in place to maintain a requirements baseline
  • Use the baseline to compare changes, deletion, and destruction
  • IT archiving requirements met
  • Ensuring litigation and discovery requirements are met


  • In short, auditors need to look at the pre- and post-incident platform to ensure there is stability; if there are changes, there needs to be a system to identify the source and communicate this to both management and the XML table participants.

    Summary

    Overall, this concept relies on making sure that participants have access to information, even if they are not connected.

    The goal is to use existing statistical analysis to forecast trend significance, then quickly disseminate this information to all the nodes.

    Clearly, the above information will likely have no resemblance to the final model. But it will serve as a starting point to make changes, ask questions, and make some better decisions on "what would be better."

    The key is to get the participants in a better position to have access to trend information and allow them to make more informed decisions on how to use their time and resources.

    The concept relies on visual summary information, and will integrate the XML feed information after stripping out the information from even the most benign sources.

    The system overall hopes to give better visibility to "what resources to move first during a recovery effort" and ensure the necessary support equipment is in place to support that effort.

    Again, the key is to give visibility to all the players at the XML table so that bottlenecks and shortages can be more quickly corrected; and to ensure that planned tasks are properly timed and ordered to more quickly get the system back to where it was, then move it forward to where it now desires to go.

    The concept will ensure that the needed preparatory steps are in place prior to the major milestones. For example, until the final goods arrive, and before we expect a large boat to be replaced, we might set up a small training program to help people build wooden fishing boats.

    The concept will also make sure that there is a trace along the various delivery systems to track both the goods and the funds. Physically tracking the product to the final client; and in the early stages ensuring that the requirements are given visibility and appropriate priority. The concept would ensure that the intermediate steps are identified, planned for, and the status relative to these steps is known to facilitate completion and workarounds and alternatives.

    Because the basic needs are taken care of or planned for, this will give the decision maker more flexibility to think about the longer range plans such as whether there is enough usable land to plant crops, or what efforts must be taken to get rid of the sea water-salt left in the cropland; and also ensuring that there are plans in place to address future roads and back-up travel roads as the relief convoys cause understandable damage to the primary travel routes.

    The concept would also ensure that there is a system in place to match requirements and needs, similar to a stock exchange bid-ask system, down to the individual needs such as wood for boats, parts for desalinization tanks, or new efforts to more quickly provide petroleum on ships that refine the oil on the supertanker and immediately inject it into the disaster area with pipelines, support ships, or helicopters.
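A minimal sketch of such a bid-ask match, assuming needs and offers are simple item-to-quantity tallies (a real exchange would also weigh priority, location, and timing):

```python
def match_needs(needs, offers):
    """Greedy bid-ask style match of requirements to offers by item.
    needs/offers: dicts mapping item -> quantity. Returns matched
    quantities per item plus the unmet remainder, so shortfalls stay
    visible to everyone at the table."""
    matched, unmet = {}, {}
    for item, qty in needs.items():
        available = offers.get(item, 0)
        matched[item] = min(qty, available)
        if qty > available:
            unmet[item] = qty - available
    return matched, unmet
```

The unmet remainder is as important as the match itself: it is the published "ask" that attracts the next round of offers.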

    It's doable

    The key is to remember how much has been done in just a few days since the Tsunami. We've seen many rise to the occasion. The money is there. The willingness is there.

    Major funding efforts are underway to move forward. I hope that this blog entry provides some thoughts on what role XML and EDI may play in preparing for the next disaster.

    With the right warning system and commitment, the loss of life could be zero. It just means deciding that this is the only acceptable alternative and making it happen.

    It can be done.

    Search terms

    Tsunami warning XML RSS feed Atom isolated remote disconnected unconnected
    Definition:XML interface

    Method of providing EDI-XML information to non-connected users.


    Reader Update: here

    Robert X. Cringley, Danny Ayers, and Bob Wyman assume XML will work for Tsunami Warnings. Excellent. Let's not forget those who have no access to XML, feeds, or the internet.

    Overview

    I recently blogged about building off Griffin's blog to create an XML table. The XML table would be a platform to integrate various XML-users into a single system. All the participants at the XML table would have two-way statusing: Being able to receive and provide inputs to all nodes connected to the XML table.

    Purpose

    This blog outlines expands on this concept in identifying specific data that could be used, transmitted along this table; and then addresses the question of, "How to provide this information to participants who have no access to the table."

    Application: Intended Use and relevancy to real world

    At this juncture, my primary objective in outlining these concepts is to provide additional inputs on the developing XML-related technology that could be used in disaster preparation and warning. Specifically, I hope to provide some thoughts from the "end user's perspective" on what XML-related products might be useful in both preparing for and recovering from a disaster, such as a Tsunami

    Problem: Inability to notify non-connected clients to information otherwise available electronically

    The problem with the current system is that those most likely to need assistance are least likely to be connected to the electronic information.

    Clearly, there are several non-electronic methods available. The challenge is in figuring out which methods would still be reliable in the absence of electricity or connection to the web; and at the same time remain reliable immediately prior to, during, and after the disaster.

    For example, one could suggest that a way to interface between the internet and non-connected users would be to use a system of bells, music, sounds, and visual cuing devices like colored smoke.

    One flaw with non-electronic methods is that they may fail immediately prior to the disaster. For example, one novel approach to send XML-EDI information to non-connected users would be to use homing pigeons to notify distant provinces of a critical event.

    However, this presupposes that pre-disaster conditions will not affect the pigeon. There have been numerous reports that there were "no animals" in the Tsunami's wake, suggesting that animals either were affected or "knew in advance" that there was an imminent problem.

    The immediate issue, with respect to using pigeons to transmit XML-EDI data to non-connected users is that the pigeons may very well get signals of an impending disaster and flee, never arriving at their intended destination.

    This is not necessarily a problem, because the flip side of this "possible non-arrival" is that the non-arrival could in itself be a significant signal.

    Specifically, if animal behavior just prior to a disaster were studied further, the change in behavior could be the necessary signal of a problem.

    Regular timing of information

    If we use the analogy of pings and bit-torrents, we might arrive at another perspective. Specifically, when the XML systems ping each other, they are merely statusing "I'm here, this is the change."

    If a "non-electronic warning system" were one where the pigeons were regularly dispatched at pre-determined times, with known flight times, then a change in arrival time, or a failure to appear at all, could in itself be important information, so long as the baseline arrivals were reliable enough.

    This is to say that the baseline information of the XML interface system could be a regular dispatch of homing pigeons with small status messages. As soon as their arrival times change, or they fail to show, the participants would know to look into the matter deeply.

    Clearly, this requires research as to whether the "non-arrival of a pigeon, or number of pigeons over time relative to a baseline arrival time" would be something that could be reliably used to predict earthquake or Tsunami conditions.
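    To make the "absence as signal" idea concrete, here is a minimal sketch of a scheduled-arrival monitor, assuming regularly timed dispatches with a known tolerance; the schedule and tolerance values below are illustrative, not calibrated:

```python
from datetime import datetime, timedelta

# Illustrative assumptions: a dispatch every 60 minutes, and an arrival
# is "on time" if it lands within 15 minutes of its expected time.
INTERVAL = timedelta(minutes=60)
MAX_DELAY = timedelta(minutes=15)

def missed_arrivals(expected_times, actual_times, max_delay=MAX_DELAY):
    """Return the expected arrival times that had no matching actual
    arrival within max_delay. The non-arrival is itself the signal."""
    missed = []
    remaining = sorted(actual_times)
    for exp in sorted(expected_times):
        hit = next((a for a in remaining if abs(a - exp) <= max_delay), None)
        if hit is None:
            missed.append(exp)   # no pigeon/ping showed up for this slot
        else:
            remaining.remove(hit)
    return missed
```

    Whether the carrier is a homing pigeon or a network ping, the logic is the same: a reliable baseline schedule turns silence into data.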

    XML Table

    One idea behind the XML table is that the participants would have two-way statusing. Currently, the XML-EDI concept is focused mostly on one-way communication, providing content and information to the other end of the pipeline.

    However, under an XML-table concept, the participants would be able to communicate both ways. No longer would a feed-reader be an idle consumer, but there would be a mechanism to provide direct feedback to the XML-content provider.

    With respect to Tsunamis, the concept is this: An XML-reader would not simply review status messages. The XML-reader would also be able to provide feedback information to the central table on local conditions.
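    As a sketch of what such a two-way status message might look like on the wire, here is a minimal round-trip using Python's standard XML library; the element names are my own invention for illustration, not any existing standard:

```python
import xml.etree.ElementTree as ET

def build_status(node_id, observation):
    """A node is not just a feed consumer: it can also push a small
    status message back toward the central table."""
    msg = ET.Element("status")
    ET.SubElement(msg, "node").text = node_id
    ET.SubElement(msg, "observation").text = observation
    return ET.tostring(msg, encoding="unicode")

def read_status(xml_text):
    """The table side: parse an incoming status message into a record."""
    msg = ET.fromstring(xml_text)
    return {"node": msg.findtext("node"),
            "observation": msg.findtext("observation")}
```

    The point is symmetry: the same small document format flows in both directions along the pipeline.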

    Think of an earthquake monitoring system just after an earthquake. One method of evaluating the scope of the earthquake-impact using non-Richter information is to have various people self-report the damage: Books falling off shelves, broken dishes, windows cracked, cars moved, building foundations changed.

    These aggregate verbal-text-status messages can be roughly related to a Richter-reading. Again, the precision does not appear to be high.
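    A rough sketch of turning these verbal reports into a number; the keywords and weights here are purely illustrative, loosely in the spirit of an intensity scale, and not calibrated against any real data:

```python
# Hypothetical mapping from anecdotal damage keywords to a crude
# intensity weight. These values are illustrative assumptions only.
DAMAGE_WEIGHTS = {
    "books": 3,       # books falling off shelves
    "dishes": 4,      # broken dishes
    "windows": 5,     # cracked windows
    "cars": 6,        # cars moved
    "foundation": 8,  # building foundations shifted
}

def estimate_intensity(reports):
    """Scan free-text reports and return the highest damage weight seen,
    as a rough stand-in for a Richter-style reading."""
    score = 0
    for text in reports:
        lowered = text.lower()
        for keyword, weight in DAMAGE_WEIGHTS.items():
            if keyword in lowered:
                score = max(score, weight)
    return score
```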

    However, there is a mapping system used in law enforcement whereby specific crime scenes are mapped. Over time, similar crimes are consolidated into a single image. Using linear programming and statistics, there are proven methods to pinpoint the time, route, likely origin, and possible next crime scenes.

    There are cases where bank robberies were analyzed and law enforcement, using this mapping technique, was able to predict where and when the next robbery would occur, and was waiting for the robber. Ironically, in one case the robber failed to show up in his own car, used a taxi, and law enforcement was able to follow him to make the arrest.

    Also, this mapping system can also be used to pinpoint the likely home address of the criminal. This is to say that criminals tend to act in predictable ways, traveling certain distances, and making certain decisions about how far they will travel from their home. They have to move far enough from where they will be recognized, yet still within moving distance.

    These types of decisions affect the "drunken sailor's random walk" and help identify specific target, travel, time, and distance information to both provide predictive information and then trace back to the origin.

    Behavior and event mapping

    Let's apply this statistical mapping theory used in law enforcement to that of earthquakes. Going back to our anecdotal reporting of earthquake-effects, we could apply the law enforcement criminal mapping system to the earthquake reports.

    This is to say that although an individual report may in itself not say much, over time and with many reports we get an aggregate picture of the entire earthquake or disaster area. Also, using the law enforcement modeling approach with enough damage reports, we should be able to pinpoint the center of the earthquake, even if it is offshore.

    In other words, even though the individual nodes and people along the coast of Indonesia may not have had any idea where the earthquake was, they might have been able to self-report various earth tremors.

    If those reports can be then aggregated into a single map, the warning system will have another piece of information.
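    One very crude way to aggregate such reports into a location estimate is an intensity-weighted centroid. A real system would fit proper attenuation models, so treat this only as a sketch of the aggregation idea:

```python
def estimate_epicenter(reports):
    """Each report is (lat, lon, intensity). Weight each location by its
    reported intensity and return the weighted centroid as a crude
    epicenter estimate. This is a sketch of aggregation, not seismology:
    a real estimate would model how shaking attenuates with distance."""
    total = sum(w for _, _, w in reports)
    if total == 0:
        return None
    lat = sum(la * w for la, _, w in reports) / total
    lon = sum(lo * w for _, lo, w in reports) / total
    return (lat, lon)
```

    Even this toy version shows the principle: no single coastal node knows where the earthquake was, but the pooled reports point somewhere.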

    Again, this is not intended to duplicate the more robust systems that currently operate out of the Tsunami warning center in Hawaii.

    Rather, this approach is designed to complement the existing systems and be used in situations where there are no primary systems.

    Again, this simply illustrates that XML-content readers can provide valuable information to the XML table. The aggregate information is important in predicting behavior.

    Other analogies

    Other similar approaches include free-traffic reports. Drivers in their cars can use a free cell-code to provide reports of traffic accidents. Also, pilots and aircraft commanders have something called PiReps, which are Pilot Reports of conditions related to weather or other in-flight information.

    This is simply to suggest that current warning systems already do what we might wish to be done in the future: Free information is provided from a node to a central collection point.

    At the same time, the issue becomes then disseminating this aggregated information back to the individual nodes. Currently we have XML aggregators which do this nicely.

    However, in the situation of a Tsunami, the challenge becomes one of providing the information to the flight crews so that they can launch their colored-smoke flares and provide a larger warning.

    I have a solution. Dave Winer has reportedly been working with iPods to allow Satellite-transmission of audio information. Pilots in aircraft have commercial frequencies in their radios.

    The idea is this: What if the XML tables had a special emergency channel through the satellite-iPod system, with a direct XML feed, both audible and in text, to the C-130 aircraft commanders?

    Although an individual node may not have any information, the aggregate information delivered via iPod Satellite could provide timely information to either launch the aircraft or set off real-time smoke signals from the C-130 platform.

    For those of you who are lost, let me say it another way.


    Overall Concept


    The idea is to allow individuals on the ground to self-report earthquake anecdotes to a central location. This information is aggregated, and the law enforcement statistical-analysis approach is used to pinpoint the earthquake origin and predict future events. The aggregate information is then provided through iPod-Satellite to aircraft commanders, who launch colored smoke flares along the coast to warn those who have no direct access to EDI-XML.

    The smoke would then signal to the pigeon-owners to do something different; if the homing pigeons are regularly dispatched prior to the event, then their non-arrival will also provide a second data-point for those not directly connected to XML-EDI.

    In theory, the satellite-notification system could be directly linked to regional and international warning centers such as CNN, the BBC, or even the American NORAD system. The system could provide valuable input information, and also provide a means to back-channel information.


    Clearly, the above is speculative. But it is intended to provide a complementary system for when all primary communication systems fail, with the goal of disseminating emergency information to those who are not connected to the internet and are not located within proximity of the existing information systems.

    Building a system which can do this

    Clearly, the next step is to construct such a system. Again, this is not intended to be the primary organizing method. Rather, it is intended to be a back-up system should the primary communication systems fail, and conditions are so bad that even amateur radio operators are unable to provide information.

    Griffin's blog in India is a baseline of information on "what types of information need to be transmitted." The information is valuable in itself. But researchers, XML-EDI developers, and recovery planners also need to dedicate resources to monitoring summary reports of the types of information available, and of what information suddenly appears out of the blue; and to continue refining the mechanisms that ensure there are systems in place to manage this information and to time the requirements to when the system is most capable of responding.

    This is another way of saying that Griffin's Blog begins to formalize the data-sets which need to be incorporated into future disaster recovery efforts. Specifically, what is needed is a method to ensure the critical requirements are prioritized; a system to dovetail the known requirements with the incoming funding; and a way to aggregate these trouble reports and predict, at the aggregate level, the types of resources, needs, and future requirements that will be required.

    In short, we propose that Griffin's Blog be used as a source of information for constructing the system, stress testing the variables and responses, and then ensuring that the final system can adequately manage the scope of conditions reported in Griffin's Blog.

    Also, what is needed is an effort to ensure that the baseline data in Griffin's blog is aggregated to predict future trends and identify unfolding requirements.

    This is another way of saying that now that we know what the problem is, the XML program management tools can be put to use. In other words, as disaster-related information arrives, XML management tools can time the inflow of this information so that the most urgent needs are given the greatest visibility, while the less urgent but still important requirements are timed, delayed, yet still planned for.
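    A minimal sketch of this kind of urgency-based timing, using a simple priority queue; the urgency scale and the requirement names are illustrative assumptions:

```python
import heapq

class RequirementQueue:
    """Sketch of 'timing the inflow': the most urgent needs surface
    first, while less urgent ones stay queued (planned for, but
    deferred). Urgency 1 is the highest priority; the insertion
    counter keeps ordering stable among equal urgencies."""
    def __init__(self):
        self._heap = []
        self._order = 0

    def add(self, urgency, need):
        self._order += 1
        heapq.heappush(self._heap, (urgency, self._order, need))

    def next_need(self):
        """Pop the most urgent outstanding requirement, or None."""
        return heapq.heappop(self._heap)[2] if self._heap else None
```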

    In other words, we use the incoming data to drive the management system. This is one of the speculated concepts behind XML feed integration. We no longer rely on set models and constructs to evaluate the incoming data; rather, we allow the incoming data to act as a check on whether the current model we are using to evaluate this aggregate information is working or needs to be revised.

    In short, although a single data point may mean "nothing," if we allow the single data point to challenge the assumptions of the existing XML-data system, we might have a realistic basis to say, "This one data-input, although isolated, is telling us something important that our current assumptions would otherwise explain away."

    This was the problem at Pearl Harbor in 1941. The single data points were not allowed to robustly challenge the overall model and planning.

    This new concept would turn these assumptions upside down. This is not to say that we respond at every slight indication; rather, we develop a system that gives greater visibility and weight to those indicators of impending problems, and puts a greater emphasis on these small indicators so they are given priority in the communication system along the XML pipeline.

    Again, the aggregate picture will tell us what is going on at the aggregate level. Yet an isolated node from a remote villager may be the needed warning that things are starting to rumble far off the coast, well before the system in Hawaii can analyze the earthquake and then provide a warning to an isolated fisherman far out at sea.

    The new system would essentially act as a back-up in taking these isolated reports, aggregating them, and then quickly providing the summary information to those who can most quickly send warning signals using smoke flares, chimes, music, speakers, and even homing pigeons.
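    A sketch of how isolated reports might be weighted against a baseline so that outliers get priority rather than being explained away; the scoring and threshold below are illustrative assumptions:

```python
def anomaly_score(value, baseline_mean, baseline_std):
    """How far a single report sits from the baseline, in standard
    deviations. The Pearl Harbor lesson applied: a lone data point
    far outside the model deserves more visibility, not less."""
    if baseline_std == 0:
        return 0.0
    return abs(value - baseline_mean) / baseline_std

def prioritize(reports, mean, std, threshold=3.0):
    """Take (label, value) reports; return the labels whose deviation
    from baseline exceeds the threshold, most anomalous first, so they
    get priority along the communication pipeline."""
    flagged = [(anomaly_score(v, mean, std), label) for label, v in reports]
    flagged = [x for x in flagged if x[0] >= threshold]
    return [label for _, label in sorted(flagged, reverse=True)]
```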

    Major efforts

    Moving from Griffin's Blog data to the final concept

    The following are the major activities needed to accomplish this effort:

  • Monitor data/types in Griffin's blog
  • Proof of concept
  • Establish the connections within the nodes
  • Identify requirements
  • Coding EDI-XML and testing
  • Acquiring the hardware
  • Conducting the integration testing between XML and non-connected clients
  • Real life testing
  • During a simulated disaster, show that the system can match the emerging, forecasted, and predicted requirements with incoming assistance, funding, and attention
  • Demonstrate that XML management tools can adequately prioritize needs and allocate revenues to needs, all the while timing the funding to meet long term recovery needs


    Building the XML table

    The XML table is to allow individual nodes to provide input status messages on anecdotal events like earth tremors, books falling off shelves, and strange noises. The system would take free SMS inputs from cell phones.

    The XML table is a program management tool.
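    A sketch of what parsing such a free SMS status input might look like; the KEY=value message format here is my own assumption for illustration, not an existing convention:

```python
def parse_sms(message):
    """Parse a free-form SMS status report of the hypothetical form
    'LOC=village-12 EVENT=tremor SEV=4' into a record the XML table
    can aggregate. Tokens without '=' are skipped as noise."""
    record = {}
    for token in message.split():
        if "=" in token:
            key, _, value = token.partition("=")
            record[key.strip().upper()] = value.strip()
    return record
```

    Each parsed record would then become one more node input at the table.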


    XML Table

  • Two-way statusing of table and individual units [Bit Torrent]
  • Single-snapshot and picture of the entire area of interest, whether it be an XML pipeline or a geographic snapshot like LandScan.
  • Program management and calendarizing: Timing feeds and XML-EDI information to tasks and events as the system is ready to respond.
  • XML table would strip out relevant information from the XML feed with an integrator; participants would use the aggregate information as desired for training, simulation, forecasting, and preparation.


    Let's consider how the individual participants might use the information. If we look at a picture of the Indian Ocean, we can use it as an analogy for the table. All the people along the coastline and going inland become "nodes at the XML table."

    All the participants have a chance to provide input, access to information and integrate the XML-EDI feeds as they desire; use the information for training simulations, requirements-resource matching; and statusing the system.

    Goal

    Going from Griffin's blog to the final concept means cataloging the types of information in Griffin's blog; creating XML integration systems that can strip out the valuable information; and preparing a set of requirements for the type of XML table needed, ready to take similar inputs in a future disaster preparation and recovery effort.

    Disaster support

    We also need to make sure that recovery services are involved: make sure they understand XML-EDI, and that the XML-EDI requirements in this system are calibrated to solve the end-client requirements at the time of the disaster.

    Analogy

    A central management tool like Drupal could be used to manage this program. Also, one analogy for "reporting the reliability of the character" would be Charity Navigator, which would identify the various factors and indicators of the various systems and approaches, ensure the XML developers knew what they were working with, and give them a good understanding of the reporting reliability of the various components.

    What is different is the two-way statusing of the nodes, or two-way communication along what is currently only a one-way XML pipeline.

    Feed integration in disaster preparation and recovery

    The issue isn't simply looking at snapshots of information, but comparing various factors along the feed pipeline.


    Sample Factors of XML Pipeline and Table

  • how the data changed and why;
  • depicting the information in a summary way
  • Comparing the current summary presentation with other feeds, reports, summaries, and predictions on trends, origins, and likely specific next events
  • Drawing inferences about the reliability and credibility of the feed, not just the data but in terms of a stable, reliable source of information. For example, being consistently wrong and unreliable is sometimes more valuable information than being inconsistently reliable.


    Testing

    For purposes of feed management, integration, and decision one may choose to ignore the rules and simply test the responsiveness of the system.


    Sample Responsiveness Criteria

  • Adequacy of response
  • Connection channels
  • Feedback
  • Effectiveness in situation-A vs what is claimed to be impossible in situation-B
  • Suitability of regulatory system with respect to standards, conduct, and oversight


    Sample uses of XML Table platform data

    The next few paragraphs are some ideas on what might be some sample "day to day uses" of the aggregated-integrated XML table data. This is merely to discuss specific examples where policy makers may ask, "Give me some examples where this could be used all the time, not just in an emergency."

    This is another way of saying that the system, for it to be reliable in a disaster, needs to show itself to be reliable for every day use, and is a system that people are familiar with, trust, and regularly use. There's no sense showing up just prior to a disaster with a new technology that the clients are unfamiliar with.

    Feed integration for personal decisions

    Sample decisions individual participants could make using XML Table

  • Environmental risks
  • Forecasts on interacting with other players
  • Predictive information on time to acquire support
  • Trends on who is hiring labor to engage in relief work
  • Indicators on risks when interacting with various nodes
  • Adequacy of plans and preparatory requirements to interact with these realities
  • Developing personal plans to prepare and integrate these decisions into their day-to-day lives.


    Sample of types of information that individuals may need

  • Baseline data on the reliability of the system
  • Anecdotes of tests and current system capabilities and flaws
  • Responsiveness of colleagues and peers to the current capabilities and likely requirements
  • Cost trends of replacement products for restart
  • Personal requirements on satisfaction with timeliness of municipal notices
  • Information on no-notice audits of the various components and the relative weights of these evaluations in terms of what core problems need attention
  • Success criteria, and progress/outcomes with respect to this success
  • Anecdotes of deviations


    Evaluations

    These are the types of evaluations the individual XML table participant could make when integrating XML feeds into their daily planning:


    Sample End-user evaluations

  • Adequacy of the infrastructure with respect to goals
  • Quality-of-life standards vs what one is actually getting vs what one is likely to experience in both a disaster and the follow-on effort
  • True social costs when exposed to conditions


    Decisions

    These are the types of decisions people could make who are applying this model of feed integration and XML tables to a disaster preparation and recovery effort:

    Sample End-User Decisions

  • Where to put money, time, energy, and focus

  • What to wait on, not purchase, delay for another day

  • When to move, change location, and restart


    Based on the above factors, evaluations, decisions, and results, the XML table will know how to change the incoming feeds, what is missing, and what new methods and monitoring are needed to identify the holes and gaps in the information pipeline.

    Feed integration in investment banking

    Such a massive undertaking requires funding. To date, the relief efforts rely on contributions. However, there will be a time when debt-forgiveness subsides, and the investment bankers will appear to dovetail long-term development efforts with local requirements.

    The XML table and feed integration concept will do several things for those who seek to organize the long-term financing.


    Sample Investment Banking Analysis

  • Identify decision points; identify likely problems when a written contract says one thing, but what is most likely to happen and be enforceable is something else

  • Contrast reported trends vs indicators; identifying new thresholds, baselines, and warnings to the suite of tools used to monitor investment risks [non-financial metrics to evaluate risks of default: Energy use relative to capacity throughput vs actual output vs reported output vs what the facility is reasonably able to produce].

  • Assign weights to anecdotes; change the weights given to sources that are increasingly questionable because of a lack of corroboration from other sources, or because there are no actual "next steps" as would be expected if the forecasted trend and expected momentum were real.


    XML Table information for financial planners

    The concept could be applied to the following indicators and data, integrated into the XML table, delivered via XML streams to the table, and sent back to the individual nodes.

    Sample factors

  • Industry analysis
  • Reliability of management
  • OpCash trends
  • Company-reported trends
  • Oil demand and energy consumption trends locally and globally as a check on the reasonableness of reported output and growth
  • "per cent-utilization of resources" provided in XML-streams
  • reported ship times
  • Anecdotes on the ship-time, storage, and offsite housing
  • Product sales vs product orders and backlogs.


    Essentially, the above indicators would identify back-logs in the logistics pipeline and identify alternatives for transshipment.

    For example, the above indicators can be monitored to know in advance:

    "if we keep shoving goods into a port that does not have adequate fuel, or transshipment, we need to slow down-stop the shipments until we are sure we have helicopters in place to move the goods; otherwise, the goods have a high chance of getting damaged because they have not been packaged correctly to sit outside in an unprotected area."
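    That rule of thumb could be encoded as a simple gate on the indicators; the rates and decision strings here are illustrative assumptions:

```python
def shipment_decision(inbound_rate, transship_capacity, fuel_ok=True):
    """Crude gate on the quoted rule: if goods arrive faster than the
    port can move them onward, or fuel is short, slow or stop the
    shipments until helicopters or other transshipment are in place.
    Rates are in the same arbitrary unit (e.g. pallets/day)."""
    if not fuel_ok or inbound_rate > transship_capacity:
        return "slow-stop shipments"
    return "continue shipments"
```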

    Monitoring XML table concept

    It's all well and good to come up with a big idea. The trick is to make it work. In turn, a plan to succeed also needs to be monitored to make sure it is actually working.

    This is where the XML table auditors come in. Auditors need to be involved in the XML table process. They will be the ones that need to understand the system, know the requirements, goals, and policies; and then be in a position to provide timely information to management so that corrective action can be taken.

    Ideally, it would be nice if the XML table concept could be used to integrate the status of this program; allow the individual components to self-report their progress toward the milestones; and then simply show that the XML table concept works as a means to provide two-way communication-statusing along the XML pipeline.

    Tasks for auditors

    Auditors will need to work with system engineers to stress-test the XML pipeline and table components.


    Sample Auditing Objectives

  • Adequacy of the feed
  • Internal controls
  • Information access
  • Timeliness of updates
  • Adequacy of closeout and rescheduling of efforts
  • Post-event auditing: comparing goals to actuals and documenting corrective action plans.


    Auditors also need to be able to use the XML feed tools to do their jobs.

    Sample Auditor Uses of XML Table

  • Integrate feed requirements with policies
  • Identify feeds at odds with baseline requirements
  • Monitor the feed archive streams
  • Compare management indicators to monitor closeout
  • Compare change from the baseline
  • Ensure there is a system in place to maintain a requirements baseline
  • Use the baseline to compare changes, deletion, and destruction
  • IT archiving requirements met
  • Ensuring litigation and discovery requirements are met


    In short, auditors need to look at the pre- and post-incident platform to ensure there is stability; if there are changes, there needs to be a system to identify the source and communicate it to both management and the XML table participants.
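    A sketch of such a baseline-comparison check; the snapshot format (a simple name-to-value mapping) is an assumption for illustration:

```python
def baseline_diff(baseline, current):
    """Compare a requirements baseline against the current snapshot and
    report additions, deletions, and changes, so the source of any
    pre/post-incident change can be identified and communicated."""
    added = {k: v for k, v in current.items() if k not in baseline}
    deleted = {k: v for k, v in baseline.items() if k not in current}
    changed = {k: (baseline[k], current[k])
               for k in baseline if k in current and baseline[k] != current[k]}
    return {"added": added, "deleted": deleted, "changed": changed}
```

    Run against periodic feed snapshots, this also covers the archive, change, and deletion checks in the list above.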

    Summary

    Overall, this concept relies on making sure that participants have access to information, even if they are not connected.

    The goal is to use existing statistical analysis to forecast trend significance, then quickly disseminate this information to all the nodes.

    Clearly, the above information will likely have no resemblance to the final model. But it will serve as a starting point to make changes, ask questions, and make some better decisions on "what would be better."

    The key is to get the participants in a better position to have access to trend information and allow them to make more informed decisions on how to use their time and resources.

    The concept relies on visual summary information; and will integrate the XML feed information after stripping out the information from even those most benign sources.

    The system overall hopes to give better visibility to "what resources to move first during a recovery effort" and ensure the necessary support equipment is in place to support that effort.

    Again, the key is to give visibility to all the players at the XML table so that bottlenecks and shortages can be more quickly corrected, and to ensure that planned tasks are properly timed and ordered to more quickly get the system back to where it was, and then move it forward to where it now desires to go.

    The concept will ensure that the needed preparatory steps are in place prior to the major milestones. For example, until the final goods arrive, and before we expect a large boat to be replaced, we might set up a small training program to help people build wooden fishing boats.

    The concept will also make sure that there is a trace along the various delivery systems to track both the goods and the funds. Physically tracking the product to the final client; and in the early stages ensuring that the requirements are given visibility and appropriate priority. The concept would ensure that the intermediate steps are identified, planned for, and the status relative to these steps is known to facilitate completion and workarounds and alternatives.

    With the basic needs taken care of or planned for, the decision maker will have more flexibility to think about longer-range plans: whether there is enough usable land to plant crops, what efforts must be taken to remove the sea-salt left in the cropland, and whether there are plans in place for future roads and back-up travel routes as the relief convoys cause understandable damage to the primary travel routes.

    The concept would also ensure that there is a system in place to match requirements and needs, similar to a stock exchange bid-ask system, down to the individual needs such as wood for boats, parts for desalinization tanks, or new efforts to more quickly provide petroleum on ships that refine the oil on the supertanker and immediately inject it into the disaster area with pipelines, support ships, or helicopters.
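    A toy sketch of such a bid-ask matching system; the record format and first-come matching rule are my own simplifications of the idea:

```python
def match_needs(needs, offers):
    """Stock-exchange-style matching of individual needs ('bids') with
    available supplies ('asks'), down to items like wood for boats.
    First-come matching on item name; a real exchange would also weigh
    quantity, priority, and location."""
    matches, unmet = [], []
    pool = list(offers)
    for need in needs:
        hit = next((o for o in pool if o["item"] == need["item"]), None)
        if hit:
            matches.append((need["who"], hit["who"], need["item"]))
            pool.remove(hit)   # each offer is consumed once
        else:
            unmet.append(need)  # unmet needs stay visible on the table
    return matches, unmet
```

    The unmet list is as important as the matches: it is the standing "bid book" that tells donors and planners what is still needed.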

    It's doable

    The key is to remember how much has been done in just a few days since the Tsunami. We've seen many rise to the occasion. The money is there. The willingness is there.

    Major funding efforts are underway to move forward. I hope that this blog entry provides some thoughts on what role XML and EDI may play in preparing for the next disaster.

    With the right warning system and commitment, the loss of life could be zero. It just means deciding that this is the only acceptable alternative and making it happen.

    It can be done.

    Search terms

    Tsunami warning XML RSS feed Atom isolated remote disconnected unconnected