29 March 2005

Tsunami and earthquakes: How the existing systems could be improved

Quick tip: Tsunami model based on longitude and latitude.

Introduction

The recent earthquake off the coast of Indonesia prompted me to review the existing data feeds. I outline below some suggestions to improve the existing notification systems.

The proposal is to seamlessly integrate multiple feeds in a new way, giving users specific information about a discrete point along the ocean; the illustrative links are summarized in the comment.


Current limitations


The current systems suffer from the following shortfalls:

  • Users are asked to define in advance their preference on when to be notified, based on earthquake magnitude. The real issue is water depth.

  • Users are asked to define their area of interest along a broad range, rather than a discrete point. The broad range does little to assist users with specific information on what to do in the local area.

  • Users are also not given specific information on what time the Tsunami may arrive. Rather, users are simply given a general warning that an event has occurred.

    The following discussion may or may not be applied to an event warning. Rather, it is hoped that it is helpful as an illustration of a broader tool: that users could benefit from a simple tool that allows them to import multiple feeds into a single platform; integrate the data; transform the data into new information; and then create new feeds that will be of use when the event occurs.


    Discussion

    XML Equation


    This tool would import XML feeds and transform their numerical data into a third output.

    The focus of this tool would be a standalone platform to which users could upload public feeds, and the platform would provide outputs in a third feed.

    The platform would take input values from two or more feeds; permit users to define the relationships between the values in those feeds; then capture the result in a third feed.
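    As a minimal sketch of this idea (assuming, purely for illustration, that each feed has already been parsed into a dict of named values; the function and field names are hypothetical):

```python
# Minimal sketch: combine two parsed feeds via a user-defined relation,
# producing the payload for a third, derived feed.
def combine_feeds(feed_a, feed_b, relation):
    """Apply a user-defined relation to two parsed feeds."""
    return {"result": relation(feed_a, feed_b)}

# Example relation: scale a quake magnitude by a depth-based weight.
quake = {"magnitude": 9.0}
site = {"depth_weight": 0.5}
derived = combine_feeds(quake, site,
                        lambda a, b: a["magnitude"] * b["depth_weight"])
```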


    Application


    This tool would be useful when working with Tsunami and earthquake systems. Users currently have to rely on platforms that provide data which may or may not be useful.

    For example, the US Geological Survey provides data feeds related to earthquake magnitudes; however, this feed does little to pinpoint the likely location or direction of the waves, nor does it translate ocean-depth information into meaningful predictions about the probability and direction of a Tsunami.

    There are other models that rely on longitude, latitude, and magnitude inputs, but these systems do little to integrate with external feeds, nor do they provide point estimates.

    PubSub does allow users to define broad geographic regions, but these do little to pinpoint an exact location. PubSub, like the US Geological Survey, also permits users to define quake magnitudes as a parameter to screen earthquake warning information. However, there is little guidance to users on what magnitude is important, nor on the relationship between magnitude, ocean depth, and proximity to landfall.

    Another weakness of the current systems is that the information generally relates to the point of origin, providing little specific information on the expected landfall time and no window of how long users would have to react.

    Also, users get little information about what to do. This can be problematic in an unfamiliar location. Users would benefit if, once they received a warning of landfall, they received specific directions on suggested travel routes on a 3-D map, both in their native language and fully synchronized with GPS.

    Other systems like GeoURL can take user inputs on specific Longitude and Latitude information. This approach could be applied to predictions about Tsunami waves, then tell users when the Tsunami would be expected to arrive at a specific point. Users would also benefit if they had the ability to quickly pinpoint others in the region who were similarly affected by the same event.

    This approach would overcome the weaknesses of the above three systems, and integrate them. Moreover, the approach would be applied more broadly to multiple data feeds and permit users to extract specific information, and integrate multiple feeds into a single black box. Users would then be able to tailor their specific needs to those data fields, then extract the most important and meaningful information.


    Benefits of the proposed method


    Users would get specific locations, not general ones, and a time estimate related to a specific point.

    The warning system would relate the risk of a Tsunami to water depth, and the calculations would be done automatically.

    There would be no reason to assign or differentiate feeds based on quake magnitude; the system would simply translate the existing information into a meaningful forecast about the Tsunami.


    Proposed approach


    Set-up: When choosing a Tsunami warning feed, users would click on a location on a map. Users would define their specific travel plans. Users could choose either a specific location, a travel route, or a broader region along the coastline.

    Event: The system would take magnitude, latitude, and longitude information, plus the error term, and apply this information to the maximum and minimum ocean depths in the area of interest.

    Calculations: Once the event occurs, the incoming data would automatically be transformed, taking magnitude, longitude/latitude, and variations in water depth to model the predicted wave patterns. The model would then forecast the expected time to landfall at a specific point.

    Specificity: Based on the user's initial set-up and location of interest, the model would specify the time the waves would arrive at a specific location. The expected landfall time would be cross-checked against likely traffic patterns and travel routes.

    Warning: The message sent would include expected wave height, expected time to landfall, and the safest route. The warning message would override incoming phone calls; automatically activate TV, phone, and radio; and broadcast regardless of whether user devices were on or off. Users would give pre-approval to have powered-off devices activated automatically, or to have the phone broadcast an audible tone even while still on the hook.
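    The core landfall calculation can be sketched with the standard shallow-water approximation for tsunami speed, c = sqrt(g × depth); the distance and depth values below are illustrative assumptions, not data from any real feed:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def tsunami_speed_ms(depth_m):
    """Shallow-water wave speed: c = sqrt(g * depth)."""
    return math.sqrt(G * depth_m)

def time_to_landfall_min(distance_km, mean_depth_m):
    """Rough landfall estimate assuming a constant mean depth along the path."""
    speed = tsunami_speed_ms(mean_depth_m)
    return distance_km * 1000.0 / speed / 60.0

# Example: a wave crossing 600 km of ocean averaging 4000 m deep
# travels at roughly 198 m/s, arriving in about 50 minutes.
eta = time_to_landfall_min(600, 4000)
```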


    Map Selection


    Users would input location on a map. Users would point on a 2-D map either a specific point or a region along the coast line.

    System would report magnitude, longitude, latitude.

    Model would calculate the expected landfall times.

    System would identify the wave flows and intersect points.

    Time and wave size data would be related to the target not the point of origin.

    Instructions given.

    Event occurs: Data: magnitude, location

    Feed: System captures the magnitude

    System goes to model: inputs magnitude, longitude, and latitude.

    Model projects waves, sends message related to time to impact; and links to data on the magnitude and probable wave size.


    Instructions


    This system would allow users to get warnings despite local language differences.

    The incoming wave would be a function of time to impact a specific location.

    The system would display a map, directions, and a route to safety.


    XML Equation

    Transform multiple feeds into useful user information


    The existing feeds do not integrate, nor do they provide meaningful information about a specific location.

    The XML black box would integrate multiple feeds into a single system, and the user could define the relationships between these feeds.

    Users would be prompted for the number of feeds they want to integrate. The system would then provide one feed-input box for each feed the user defines.

    Users would be given an input box to provide each individual feed. Users would be prompted to identify what information and data to extract from the feed. These would form one set of data for transformation.

    Each box would accommodate one feed. This would allow the user to see all feeds at the same time and verify that the extracted data and variables are correct. This multiple-feeds-visible-at-once layout would be used instead of progressive feed entry into a single box.

    Users would be able to assign and relate feeds in an equation. Users would order the feeds, define which data to extract, and choose what equation to apply to subsequent feeds.

    This module would define the relationship: how the feed output values would intersect with other feeds' data. This platform would provide multiple feed entries.

    Users could springboard off what the current search engines do: Discrete numbers can be multiplied and divided with simple commands.

    This approach would allow users to place multiple feeds into a single platform, like OPML, but rather than just providing a list, it would transform the feed output data into a new feed.
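    One way to sketch the "XML Equation" itself, assuming each extracted feed value has already been reduced to a number and limiting the operands to basic arithmetic (the function names are hypothetical):

```python
# Operands the user can pick from; applied in user-defined order.
OPS = {
    "+": lambda a, b: a + b,
    "-": lambda a, b: a - b,
    "*": lambda a, b: a * b,
    "/": lambda a, b: a / b,
}

def evaluate(first_value, steps):
    """Fold a list of (operand, value) pairs left to right, the way a
    user would chain feed boxes into an equation."""
    result = first_value
    for op, value in steps:
        result = OPS[op](result, value)
    return result

# Example: distance in metres / wave speed in m/s, then seconds -> minutes.
minutes = evaluate(600_000.0, [("/", 198.0), ("/", 60.0)])
```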


    Input Box

    Multiple feeds integrated


    This is a sample of what the user would be given. This box shows the number of feed-boxes the user would see.

    Notice that the feed-box for the feed-URI is larger than the variable-boxes.

    Each feed-box is large enough for the user to see the entire feed-URI. The box would automatically transform the feed into a character-designation; and users could either accept the default name or make a new one. Users would be able to see both the URI and character name assigned to that feed.

    The next box is the value box. This is the specific data that the user wants to extract.

    The third box is the operand or transformation function. Users would define how they want the specific variable [box 2] to be adjusted.

    Users would be able to input the same feed multiple times. Each box would permit users to extract new or different data from the same feed. Users would be able to order the feeds in any manner that they want. Users would click on one section of boxes and change the order that they appear. This would be a simple click, drag, and drop similar to how functions are dragged to the browser tool bar.
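    The feed-box form described above might be modeled as a simple record per row; the URIs and field names below are hypothetical placeholders:

```python
from dataclasses import dataclass

@dataclass
class FeedBox:
    """One row of the input form: feed URI, the short name the system
    assigns, the value to extract, and the operand to apply."""
    uri: str       # large box: the full feed URI, fully visible
    name: str      # auto-assigned character designation, user-editable
    variable: str  # box 2: the specific datum to extract
    operand: str   # box 3: how to transform that datum

boxes = [
    FeedBox("http://example.org/usgs.xml", "A", "magnitude", "extract"),
    FeedBox("http://example.org/depths.xml", "B", "depth", "*"),
]
# Reordering is just reordering the list (the click, drag, and drop above).
boxes.reverse()
```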


    Application


    Let’s take a look at how the XML Equation would actually work in practice.

    The system would take the output of an XML feed from the Geological Survey. This information would relate to the location of the earthquake event.

    Next, the system would combine this first feed with the model inputs. In this case, the model needs longitude and latitude information.

    After combining the output values from the Geological Survey and the Tsunami warning model, the system would then extract the third feed: The picture of the Tsunami path.

    Next, the system would combine the output with the previous information provided: the system would provide a time estimate for the wave arrival.


    Steps


    This approach would require the following steps. The system would take the output of a feed in re the wave and velocity [Data, list];

    Combine it with the previous information provided [in re location]

    Arrive at a third result: Time of arrival.


    Another example


    Let’s consider another feature of the warning system. This involves doing sensitivity analysis.

    One problem with the existing forecasting models and feeds is that they rely on discrete information. They do little to work with a range of values in an area of interest.

    In this particular case, we know that the event occurred within +/- 2 Nautical Miles. Yet, our real interest isn’t where the event occurred, but the range of water depths at the point of origin.

    This section of the platform would accept the error-rate in pinpointing the exact location of the event. The system would take the accepted most probable point of origin [earthquake center], then broaden the coordinates into the plus or minus range.

    In this case the estimated point of origin is 2.01 degrees North and 97.01 degrees East; however, if we include the error range of 2.9 nautical miles, we find that this roughly equates to +/- 0.05 degrees.

    Thus, the actual range of locations for the earthquake runs from 1.96 to 2.06 degrees North, and from 96.96 to 97.06 degrees East.

    Thus, two transformations are required to accomplish this: first, translate the NM error rate into meaningful longitude and latitude information. Recall, however, that on a map longitude errors do not span the same distance as latitude errors; the model would translate the error rates into the correct differences.

    The model would then look at the range of water depths within this region.
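    The nautical-mile transformation can be sketched directly, using the facts that one nautical mile is one arc minute of latitude (1/60 of a degree) and that a degree of longitude shrinks by cos(latitude) away from the equator:

```python
import math

def nm_to_degree_offsets(error_nm, latitude_deg):
    """Convert a +/- position error in nautical miles into latitude and
    longitude offsets in degrees."""
    lat_offset = error_nm / 60.0  # 1 NM = 1 arc minute of latitude
    lon_offset = lat_offset / math.cos(math.radians(latitude_deg))
    return lat_offset, lon_offset

# Example: the 2.9 NM error around 2.01 N, 97.01 E discussed above.
dlat, dlon = nm_to_degree_offsets(2.9, 2.01)
lat_range = (2.01 - dlat, 2.01 + dlat)    # about 1.96 to 2.06
lon_range = (97.01 - dlon, 97.01 + dlon)  # about 96.96 to 97.06
```

    Near the equator the two offsets are nearly identical; at higher latitudes the longitude offset grows noticeably.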


    Summary User Requirements


    To summarize: The system would do the following:

    Take the output value of the geological survey

    Extract the error in nautical miles in a +/- value

    Convert nautical miles to arc-seconds and degrees

    Apply the range of +/- values as inputs to get an image of the range of water depths for the area of interest

    Extract minimum and maximum depth values to do sensitivity analysis on the probability of a Tsunami that would arrive at a specific area of interest.
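    The final sensitivity step can be sketched as follows, again using the shallow-water speed approximation c = sqrt(g × depth); the distance and depths are illustrative assumptions:

```python
import math

def arrival_window_min(distance_km, min_depth_m, max_depth_m):
    """Deeper water means a faster wave, so the maximum depth gives the
    earliest arrival and the minimum depth the latest."""
    def minutes(depth_m):
        return distance_km * 1000.0 / math.sqrt(9.81 * depth_m) / 60.0
    return minutes(max_depth_m), minutes(min_depth_m)

# Example: depths ranging from 1000 m to 4000 m over a 600 km path.
earliest, latest = arrival_window_min(600, 1000, 4000)
```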



    HTML Boxes to transform this data


    To accomplish the above, there would be 3 clusters of feed-information. The purpose of this initial step would be to transform the nautical-mile +/- errors into [a] range values in degrees; then [b] input values to identify the image of the ocean floor.


    Modules Which Transform Feed Data


    Nomenclature

    Box: This is a field to manually or automatically inject a feed into an HTML box

    Values: This is the data extracted from the feed placed in the box.

    Operand: This is the command used to transform the value to a desired new result. Could be a mathematical function, association, or comparison.

    Detailed boxes and modules

    Box 1 US Geological survey feed entered
    Values Nautical miles, +/- values determined
    Operand: Extract data, identify range

    Box 2 Output from Box 1 assigned to the image feed: ocean floor depths
    Values extracted from image: longitude and latitude footprint
    Operand: This feed would input the +/- values

    Box 3: Get image
    Calculate max/min: Identify maximum and minimum depth values in the area of impact; look for maximum values by finding the x-location where the change in y reaches a maximum, or where [change in y = 0], over ___ radians;
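    The max/min search in Box 3 could be approximated on a sampled depth profile by looking for sign changes in the discrete slope (the "change in y = 0" condition above); the profile values are illustrative:

```python
def depth_extrema(depths):
    """Return (index, depth, kind) for local extrema in a sampled
    profile: points where the discrete change in depth flips sign."""
    extrema = []
    for i in range(1, len(depths) - 1):
        left = depths[i] - depths[i - 1]
        right = depths[i + 1] - depths[i]
        if left * right < 0:  # slope changes sign: local max or min
            kind = "max" if left > 0 else "min"
            extrema.append((i, depths[i], kind))
    return extrema

# Example: a ridge at index 2 and a trench at index 5.
profile = [1000, 2000, 3500, 3000, 2500, 1500, 2500]
points = depth_extrema(profile)
```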





    Simulation


    Function: Do simulation on the data.

    We’ve transformed incoming data into a range of potential ocean depths, then assigned those variables to a new feed.


    Another example


    Let’s look at the same scenario, but from a slightly different perspective. Let’s transform the available feeds in a slightly different way.

    This approach would take the output of the US Geological Survey: Extracting magnitude and longitude-latitude information.

    Next, the system would apply the magnitude and longitude information as inputs to the model. Note that this step would be automatic, as opposed to the current approach of manually inputting the data and waiting for the simulation.

    Also, the seamless transformation of the data would mean that the user would not have to wait for the result. Rather than waiting for the data to arrive, then finding the site, and waiting, the calculations would be done in real time to provide the warning.

    This phase would forecast the time to arrival. It would integrate the results with GeoURL to identify the specific time the wave would arrive at a specific point.


    Steps


    Let’s take a look at the steps required to accomplish the above. Remember, the goal of this is to create a platform that will take various feeds, transform them, and convert the values into new meaningful information, which is reported on a feed.

    Ultimately, the goal is to seamlessly take input values, provide users with specific information unique to their location, and then give them meaningful directions on what route to take for safety.


    Modules to perform these transformations


    Module 1: Prepare data for simulation

    Feed Input: US Geological Survey Feed URI, and name
    Data to extract: Longitude and latitude
    Longitude and latitude information is extracted and assigned to the second feed: The simulator

    Module 2: Integrate Tsunami simulation with discrete point

    Feed Input: Tsunami simulator
    Data of interest: Extracts the longitude, latitude, and magnitude information from the US Geological Survey feed and arrives at a point estimate for a specific location on earth. The forecasted condition at a discrete point is then matched with a map.

    Module 3: Calculate arrival time

    GeoURL: This feed then takes the latitude and longitude information and identifies the time to the arrival point; it then compares the arrival time with current traffic.

    Module 4: Prepare message for user-subscriber

    The result: Users are given specific directions on where to travel based on existing conditions at a specific location.
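    The four modules might be chained as below; the risk threshold, distance, and depth are illustrative placeholders, not values from any real feed or simulator:

```python
import math

def module1_extract(usgs_event):
    """Module 1: pull longitude, latitude, and magnitude from the feed."""
    return {"lat": usgs_event["lat"], "lon": usgs_event["lon"],
            "magnitude": usgs_event["magnitude"]}

def module2_simulate(params):
    """Module 2: placeholder risk flag standing in for the simulator."""
    params["tsunami_risk"] = params["magnitude"] >= 7.0
    return params

def module3_arrival(params, distance_km=600, mean_depth_m=4000):
    """Module 3: arrival estimate from shallow-water wave speed."""
    speed_ms = math.sqrt(9.81 * mean_depth_m)
    params["eta_min"] = distance_km * 1000.0 / speed_ms / 60.0
    return params

def module4_message(params):
    """Module 4: turn the result into a subscriber-facing message."""
    if params["tsunami_risk"]:
        return f"Tsunami possible; est. arrival in {params['eta_min']:.0f} min."
    return "No tsunami expected."

event = {"lat": 2.01, "lon": 97.01, "magnitude": 9.0}
msg = module4_message(module3_arrival(module2_simulate(module1_extract(event))))
```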



    Summary


    What I’ve done is outline the existing shortfalls with the current Tsunami warning systems. The data and systems already exist. What’s needed is a simple platform that combines all the feed-data into a meaningful user report.

    Currently the US Geological Survey provides data on the earthquake magnitude and most probable point of origin.

    Also there exist systems that can forecast the Tsunami waves based on magnitude and longitude information.

    The current warning feeds require users to make decisions about magnitudes. Yet this really isn't important. What users really want to know is: based on water depth, what is the chance that a given location on earth is going to have a Tsunami, and what do they need to do?

    This system combines the existing data-feeds into a more meaningful report. The system is seamless.

    The advance of this system over the existing models is that users define their feed parameters based on what they know, not what is unknowable. Users do not know whether an earthquake is of concern based on magnitude. Rather, the real issue is whether the earthquake occurs in shallow or deep water.

    This approach also looks at the range of water depths in a likely area. The current system appears to simply take a specific point and makes no account for the range of water depths in relation to the existing error rates.

    Also, this system allows users to define their area of interest to a specific point. It does not require a user to know the names of the regions they are visiting. Rather, the users can simply point to a map, and the system will then identify the probable time to impact of the Tsunami for that specific location.

    Also, this system goes further. Rather than simply warn that an earthquake occurred; this system would go further in defining the most probable time the Tsunami would arrive, and then inform the user of the safe travel routes.

    Again, users don’t care about Tsunamis or earthquakes. Users ultimately want to stay alive. This tool doesn’t give them information that may or may not be of importance. Rather, this integrated system would provide them with specific information on what the users need to do to stay safe.



  • Quick tip Tsunami Model based on longitude and latitude.

    Introduction

    The recent earthquake off the coast of Indonesia prompted me to review the existing data feeds. I outline below some suggestions to improve the existing notification systems.

    The proposal is to seamlessly integrate multiple feeds in a new way; and give users specific information about a discrete point along the ocean, with the illustrative links summarized in the comment.


    Current limitations


    The current sytems suffer from the following shortfalls:

  • Users asked to define in advance their preference on when to be notified based on earthquake magnitude. The real issue is water depth.

  • Users asked to define their area of interst along a broad range, rather than a discrete point. The broad range does little to assist users with specific information on what to do in the local area.

  • Users are also not given specific information on what time the Tsunami may arrive. Rather, users are simply given a general warning that an event has occurred.

    The following discussion may or may not be applied to a event-warning. Rather, it is hoped that is helpful as an illustration of a broader tool: That users could benefit by having a simple tool that allows them to import multiple feeds into a single platform; integrate the data; transform the data into new ifnromation; and then create new feeds that will be of use when the event occurs.


    Discussion

    XML Equation


    This tool would import XML feeds, and transform numerical data, into a third output.

    The focus of this tool would be a standalone platform, which users could upload public feed, and the platform would provide outputs in a third feed.

    Platform would take input values from two or more feeds; permit users to define the relationships between the values in two or more incoming feeds; then capture the result in a third feed.


    Application


    This tool would be useful when working with Tsunami and earthquake systems. Users currently have to rely on platforms that provide data which may or may not be useful.

    For example, the US Geological Survey provides data-feeds related to earthquake magnitudes; however, this feed does little to pinpoint the likely location or direction of the waves; nor does it translate ocean depth information into meaningful predictions about the probability, direction of a Tsunami.

    There are other models that rely on longitude, latitude, and magnitude inputs, but these systems do little integrate with external feeds; nor do they provide point estimates.

    Also, PubSub does allow users to define broad geographic regions, but these do little to pinpoint an exact location. Also PubSub like the US Geological Survey permits users to define quake magnitudes as a parameter to screen earthquake warning information. However, there is little guidance to users on what magnitude is important, nor the relationship between magnitude, ocean depth and proximity to land fall.

    The other weaknesses of the current systems is that the information is generally related to the point of origin, and does little to provide specific information on the expected landfall time, nor provide a window of how long users would have to react.

    Also, users get little information about what to do. This can be problematic in an unfamiliar location. Users would benefit if, once they received a warning of landfall, they received specific directions on suggested travel routes on a 3-D map both in their native language and fully synchronized with GPS..

    Other systems like GeoURL can take user inputs on specific Longitude and Latitude information. This approach could be applied to predictions about Tsunami waves, then tell users when the Tsunami would be expected to arrive at a specific point. Users would also benefit if they had the ability to quickly pinpoint others in the region who were similarly affected by the same event.

    This approach would overcome the weaknesses of the above three systems, and integrate them. Moreover, the approach would be applied more broadly to multiple data feeds and permit users to extract specific information, and integrate multiple feeds into a single black box. Users would then be able to tailor their specific needs to those data fields, then extract the most important and meaningful information.


    Benefits of the proposed method


    Users would get specific locations, not general; users would get a time-estimate related to a specific point.

    Warning system would relate risk of Tsunami to water depth; and the calculations would automatically be done.

    No reason to assign or differentiate feeds based on quake magnitude; the system would simply translate the existing information into a meaningful forecast about the Tsunami.


    Proposed approach


    Set-up When choosing a Tsunami warning feed, users would click on a location on a map. Users would define the specific travel plans they had. Users could choose either a specific location, a travel route, or broaden their interest to regions along the coastline.

    EventSystem would take magnitude, latitude and longitude information; plus error term and apply this information to maximum and minimum ocean depths in the area of interest.

    Calculations Once the event occurs, the incoming data would automatically get transformed: Taking magnitude, longitude/latitude; and variations in water depth to model the predicted wave patterns. The model would then forecast expected time to landfall to a specific point.

    Specificity Based on the users’ initial set-up and location of interest, model would specify the time that the waves would arrive at a specific location. Expected landfall time would be cross-checked against the likely traffic patterns and travel routes.

    Warning Message sent would include expected wave height, time expected to landfall, and the safest route. Warning message would override incoming phonecalls; automatically activate TV, phone, and radio; and would broadcast the message regardless whether user devices were on or off. User would provide a pre-approval to have the off switch automatically activated; or the phone to broadcast an audible tone despite the phone being still on the hook.


    Map Selection


    Users would input location on a map. Users would point on a 2-D map either a specific point or a region along the coast line.

    System would report magnitude, longitude, latitude.

    Model would calculate the expected landfall times.

    System would identify the wave flows and intersect points.

    Time and wave size data would be related to the target not the point of origin.

    Instructions given.

    Even occurs: Data: Magnitude, location

    Feed: System calculations magnitude

    System goes to model: inputs magnitude, longitude, and latitude.

    Model projects waves, sends message related to time to impact; and links to data on the magnitude and probable wave size.


    Instructions


    This system would allow users to get warnings despite local language differences.

    The incoming wave would be a function of time to impact a specific location.

    System would display map and direction, route to safety.


    XML Equation

    Transform multiple feeds into useful user information


    The existing feeds do not integrate, nor do they provide meaningful information about a specific location.

    The XML black box would integrate multiple feeds into a single system, and the user could define the relationships between these feeds.

    Users would be prompted as to the number of feeds they want to integrate. The system would then provide a single feed-input box corresponding to the number of feeds the user defines.

    Users would be given an input box to provide each individual feed. Users would be prompted to identify what information and data to extract from the feed. These would form one set of data for transformation.

    Each box would accommodate one feed. This would allow the user to see all feeds at the same time and verify that the extracted data and variables are correct. This multiple-feed-visible at one time would be used as opposed to the progressive feed entry into a single box.

    Users would be able to assign, relate feeds in an equation. Users would order the feeds and define which data to extract and apply what equation to subsequent feeds.

    This module would define the relationship. How the feed output values would intersect with other feeds data. This platform would provide multiple feed-entries.

    Users could springboard off what the current search engines do: Discrete numbers can be multiplied and divided with simple commands.

    This approach would allow users to place multiple feeds into a single platform like OPML but not just provide a list, but transform the feed-output-data into a new feed.


    Input Box

    Multiple feeds integrated


    This is a sample of what the user would be given. This box shows the number of feed-boxes the user would see.

    Notice that the feed-box for the feed-URI is larger than the variable-boxes.

    Each feed-box is large enough for the user to see the entire feed-URI. The box would automatically transform the feed into a character-designation; and users could either accept the default name or make a new one. Users would be able to see both the URI and character name assigned to that feed.

    The next box is the value box. This is the specific data that the user wants to extract.

    The third box is the operand or transformation function. Users would define how they want the specific variable [box 2] to be adjusted.

    Users would be able to input the same feed multiple times. Each box would permit users to extract new or different data from the same feed. Users would be able to order the feeds in any manner that they want. Users would click on one section of boxes and change the order that they appear. This would be a simple click, drag, and drop similar to how functions are dragged to the browser tool bar.


    Application


    Let’s take a look at how the XML Equation would actually work in practice.

    The system would take an output of XML feed in the Geological Survey. This information would relate do the location of the earthquake-event.

    Next, the system would combine this first feed with the mode inputs. In this case, the model needs longitude and latitude information.

    After combining the output values from the Geological Survey and the Tsunami warning model, the system would then extract the third feed: The picture of the Tsunami path.

    Next, the system would combine the output with previous information provide: The system would provide a time estimate for the wave arrival.


    Steps


    This approach would require the following steps. The system would take the output of feed in re the wave and velocity [Data, list]

    Combine it with the previous information provided [in re location]

    Arrive at a third result: Time of arrival.


    Another example


    Let’s consider another feature of the warning system. This involves doing sensitivity analysis.

    One problem with the existing forecasting models and feeds is that they rely on discrete information. They do little to work with a range of values in an area of interest.

    In this particular case, we know that the event occurred within +/- 2 Nautical Miles. Yet, our real interest isn’t where the event occurred, but the range of water depths at the point of origin.

    This section of the platform would accept the error-rate in pinpointing the exact location of the event. The system would take the accepted most probable point of origin [earthquake center], then broaden the coordinates into the plus or minus range.

    In this case the estimated point of origin is 2.01N and 97.01 degrees East; however if we include the error range of 2.9 nautical miles, we would find that this 2 NM number roughly equates to .05 arc seconds.

    Thus, the actual range of locations for the earthquake rages from 2.005 to 2.06; and 96.96 to 97.06.

    Thus, the two transformations required to accomplish this: Is to translate NM error rate into a meaningful longitude and latitude information. However, recall that on a map those longitude-errors are not the same-distance as latitude errors. The model would translate the error-rates into the correct differences.

    Also the model would then look at the range of water depths within this region.


    Summary User Requirements


    To summarize: The system would do the following:

    Take the output value of the geological survey

    Extract the error in nautical miles in a +/- value

    Convert nautical miles to arc-seconds and degrees

    Apply the range of +/- values as inputs to get an image of the range of water depths for the area of interest

    Extract minimum and maximum depth values to do sensitivity analysis on the probability of a Tsunami that would arrive at a specific area of interest.
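    The last two steps can be illustrated with a short sketch. The depth samples are hypothetical, and the arrival window again assumes the shallow-water speed sqrt(g × depth); the point is how the depth uncertainty widens into an arrival-time uncertainty.

```python
import math

def depth_extremes(depths):
    """Minimum and maximum depth over the sampled area of interest."""
    return min(depths), max(depths)

def eta_window_minutes(distance_km, depths):
    """Arrival-time window implied by the depth uncertainty: the wave is
    fastest over the deepest water and slowest over the shallowest."""
    times = [distance_km * 1000 / math.sqrt(9.81 * d) / 60 for d in depths]
    return min(times), max(times)

# Hypothetical depth samples (meters) inside the +/- coordinate range
samples = [1200, 3500, 2800, 4100, 900]
shallow, deep = depth_extremes(samples)             # 900, 4100
earliest, latest = eta_window_minutes(500, samples)
```

    Reporting the earliest plausible arrival, rather than a single point estimate, is the conservative choice for a warning feed.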



    HTML Boxes to transform this data


    To accomplish the above, there would be three clusters of feed-information. The purpose of this initial step would be to transform the nautical-mile +/- errors into [a] range values in degrees; then [b] input values to identify the image of the ocean floor.


    Modules Which Transform Feed Data


    Nomenclature

    Box: This is a field to manually or automatically inject a feed into an HTML box

    Values: This is the data extracted from the feed placed in the box.

    Operand: This is the command used to transform the value to a desired new result. Could be a mathematical function, association, or comparison.

    Detailed boxes and modules

    Box 1: US Geological Survey feed entered
    Values: Nautical miles, +/- values determined
    Operand: Extract data, identify range

    Box 2: Output from Box 1 assigned to an image feed of ocean-floor depths
    Values extracted from image: Longitude and latitude footprint
    Operand: This feed would input the +/- values

    Box 3: Get image
    Operand: Calculate max/min. Identify maximum and minimum depth values in the area of impact; look for extrema by finding the x-location where the change in y reaches a maximum, or where [change in y = 0], over ___ radians.
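    Box 3's extrema search can be sketched discretely: a local maximum or minimum is a point where the sign of the successive differences (the "change in y") flips. The depth profile below is invented for illustration.

```python
def local_extrema(depths):
    """Indices where the discrete derivative changes sign, i.e. points
    where 'change in y' crosses zero: local maxima and minima of the
    ocean-floor profile."""
    extrema = []
    for i in range(1, len(depths) - 1):
        left = depths[i] - depths[i - 1]    # slope into the point
        right = depths[i + 1] - depths[i]   # slope out of the point
        if (left > 0 >= right) or (left < 0 <= right):
            extrema.append(i)
    return extrema

profile = [4000, 3500, 2000, 2500, 3000, 1500, 1000]
idx = local_extrema(profile)  # [2, 4]: one local minimum, one local maximum
```

    On real bathymetry data one would smooth the profile first, since sensor noise produces spurious sign flips; the sign-change test is the core idea either way.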





    Simulation


    Function: Do simulation on the data.

    We’ve transformed incoming data into a range of potential ocean depths, then assigned those variables to a new feed.
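    A toy version of that simulation might sample depths from the uncertainty range and sweep an illustrative height formula across them. Everything here is a placeholder for a real tsunami model; the sampling structure is the point, not the physics.

```python
import random

def simulate_wave_heights(depth_lo, depth_hi, magnitude, runs=1000, seed=42):
    """Sample a plausible origin depth each run and record an illustrative
    wave-height figure. The height formula is a stand-in, not physics."""
    rng = random.Random(seed)
    heights = []
    for _ in range(runs):
        d = rng.uniform(depth_lo, depth_hi)
        heights.append(magnitude * 1000.0 / d)  # shallower origin -> larger figure
    return min(heights), max(heights)

# Sweep the hypothetical 900-4100 m depth range for a magnitude-9.0 event
low, high = simulate_wave_heights(900, 4100, magnitude=9.0)
```

    The spread between `low` and `high` is the sensitivity result: if the two ends of the range imply very different outcomes, the depth uncertainty genuinely matters for the warning.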


    Another example


    Let’s look at the same scenario, but from a slightly different perspective. Let’s transform the available feeds in a slightly different way.

    This approach would take the output of the US Geological Survey, extracting magnitude and longitude-latitude information.

    Next, the system would apply the magnitude and longitude-latitude information as inputs to the model. Note that this step would be automatic, as opposed to the current approach of manually entering the data and waiting for the simulation.

    Also, the seamless transformation of the data would mean that the user would not have to wait for the result. Rather than waiting for the data to arrive, finding the site, and waiting again, the calculations would be done in real time to provide the warning.

    This phase would forecast the time to arrival. It would integrate the results with the GeoURL to identify the specific time the event would arrive at a specific point.
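    Matching the epicenter against a GeoURL-style point requires the distance between two latitude-longitude pairs; the standard haversine formula gives the great-circle distance. The subscriber coordinates below are hypothetical.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two lat/lon points."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Epicenter (2.01 N, 97.01 E) to a hypothetical subscriber point
dist = haversine_km(2.01, 97.01, 7.88, 98.39)  # roughly 670 km
```

    Feeding this distance into the wave-speed calculation yields the arrival estimate for that subscriber's exact point, rather than for a broad named region.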


    Steps


    Let’s take a look at the steps required to accomplish the above. Remember, the goal of this is to create a platform that will take various feeds, transform them, and convert the values into new meaningful information, which is reported on a feed.

    Ultimately, the goal is to seamlessly take input values, provide users with specific information unique to their location, and then give them meaningful directions on what route to take for safety.


    Modules to perform these transformations


    Module 1: Prepare data for simulation

    Feed Input: US Geological Survey Feed URI, and name
    Data to extract: Longitude and latitude
    Longitude and latitude information is extracted and assigned to the second feed: The simulator

    Module 2: Integrate Tsunami simulation with discrete point

    Feed Input: Tsunami simulator
    Data of interest: The simulator takes the longitude, latitude, and magnitude from the US Geological Survey feed and arrives at a point estimate for a specific location on earth. The forecasted condition at a discrete point is then matched with a map.

    Module 3: Calculate arrival time

    GeoURL: This feed takes the latitude-longitude information, identifies the arrival time at the point of interest, and then compares the arrival time with current traffic.

    Module 4: Prepare message for user-subscriber

    The result: Users are given specific directions on where to travel based on existing conditions at a specific location.
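    The four modules can be chained as a single pipeline. This is a sketch only: the feed format, the simulator, and the thresholds are all stand-ins for real services.

```python
import math

def extract_epicenter(usgs_feed):
    """Module 1: pull magnitude, latitude, longitude from the survey feed."""
    return usgs_feed["magnitude"], usgs_feed["lat"], usgs_feed["lon"]

def forecast_at_point(magnitude, lat, lon):
    """Module 2 stub: a real system would run a tsunami simulation here."""
    return {"tsunami_likely": magnitude >= 7.0}

def arrival_minutes(distance_km, depth_m):
    """Module 3: travel time from the shallow-water wave speed sqrt(g*d)."""
    return distance_km * 1000 / math.sqrt(9.81 * depth_m) / 60

def build_message(forecast, eta_min):
    """Module 4: turn forecast and ETA into a subscriber-facing instruction."""
    if not forecast["tsunami_likely"]:
        return "No tsunami expected at your location."
    return f"Tsunami possible; estimated arrival in {eta_min:.0f} minutes. Move inland now."

feed = {"magnitude": 9.0, "lat": 2.01, "lon": 97.01}
mag, lat, lon = extract_epicenter(feed)
forecast = forecast_at_point(mag, lat, lon)
message = build_message(forecast, arrival_minutes(670, 3000))
```

    The design point is that each module consumes the previous module's feed output, so swapping in a real simulator or a real traffic feed changes one function, not the pipeline.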



    Summary


    What I’ve done is outline the existing shortfalls with the current Tsunami warning systems. The data and systems already exist. What’s needed is a simple platform that combines all the feed-data into a meaningful user report.

    Currently the US Geological Survey provides data on the earthquake magnitude and most probable point of origin.

    Also there exist systems that can forecast the Tsunami waves based on magnitude and longitude information.

    The current warning-feeds require users to make decisions about magnitudes. Yet, this really isn’t important. What users really want to know is, based on water depth, what is the chance that a given location on earth is going to have a Tsunami and what do the users need to do.

    This system combines the existing data-feeds into a more meaningful report. The system is seamless.

    The advantage of this system over the existing models is that users define their feed-parameters based on what they know, not what is unknowable. Users do not know whether an earthquake is of concern based on magnitude. Rather, the real issue is whether the earthquake occurs in shallow or deep water.

    This approach also looks at the range of water depths in a likely area. The current system appears to simply take a specific point and makes no account for the range of water depths in relation to the existing error rates.

    Also, this system allows users to define their area of interest to a specific point. It does not require a user to know the names of the regions they are visiting. Rather, the users can simply point to a map, and the system will then identify the probable time to impact of the Tsunami for that specific location.

    Also, this system goes further: rather than simply warning that an earthquake occurred, it would define the most probable time the Tsunami would arrive, and then inform the user of safe travel routes.

    Again, users don’t care about Tsunamis or earthquakes. Users ultimately want to stay alive. This tool doesn’t give them information that may or may not be of importance. Rather, this integrated system would provide them with specific information on what the users need to do to stay safe.



    " />