16 March 2005

XML Concepts: Providing updates to single-page feeds and reporting changes in buried archives

Feed users need tools that more effectively [a] report content changes to deeply archived data; and [b] provide change notifications on single-spot-feeds.

Content remains valuable so long as the readers are able to access that information. Subscriptions and content providers become more valuable when [a] readers are able to get timely updates of changes; and [b] searchers can get tailored responses and custom-content updates.

Definitions

Spot feed: This is a feed assigned to a single, specific page. The feed only reports changes and content on a single URL.

Mirror feed: This is a polished version of a feed that includes all current changes. It presents a seamless flow of data incorporating all comments and revisions to a feed, and it includes links to the detailed changes, which are buried but available.

Summary

We outline the problems a user faces when using a spot-feed to monitor single-page content changes. Creating and monitoring a single-spot-feed is cumbersome and could be streamlined.

We also discuss the problem with updating deeply archived feeds and reporting those changes to the reader. There is room for improvement.

There are several approaches to more effectively report content changes to deeply buried archived information. Some approaches involve adjusting the ping platforms.

Other approaches adjust the system of pings and bots. More complicated versions involve integrating the reader and publisher platforms.


LEGAL NOTICE


Creative Commons License

This work is licensed under a Creative Commons License.

You may not copy any of this work to promote a commercial product on any site or medium in the universe.

If you see this work posted on a commercial site, it violates the creative commons license; and the author does not endorse the commercial product.

Free to use for non-commercial uses. Link to this original blogspot and cite as .



Introduction


I’ve been looking at putting feeds on single pages, or making spot-feeds. It seems as though things could be a little easier than they are.

I’ve found a problem with both making single-page updates and getting single-page changes to report through a feed to the aggregators.

I wanted to explore some options to update deeply archived single pages and suggest some methods to make sure these updates are reported through the aggregators.

This blog-spot discusses the benefits of using single-page feeds; the current difficulties of working with single-page feeds; and the challenge of getting single-page feeds to report to the aggregator.

I discuss several options to address this issue and outline a number of skeleton modules that might assist developers in seeing the details of what I am proposing. I also conclude with a brief outline of a test plan that could be used as a starting point to evaluate the final platform.


Enterprise brainstorming

  • What happens when the pings aren’t able to get the content, do we go back to bots?

  • What if there were a way to link the publish-command with an automatic link to the subscriber?

  • How do we create a system that reports content changes on a single buried page to those who have already expressed an interest in the original content?

  • How do we make sure that the feed reader-subscriber is given notice of a change in content, even though the current ping system doesn’t provide information on deep content changes in the archive?

  • What is physically going to prompt the reader to review information in the feed that is related only to content changes when the ping-system and aggregators neither report nor display subtle changes in content?


  • XML Single Page Update

    Targeted updates to subscribers


    The thing about feeds is that you can get a lot of content. What happens when you find a specific page you want to monitor?

    Right now, the XML support tools are geared toward providing updates on new feeds and content. They’re not geared toward specific pages and minor changes.

    Let’s say you’re running a site. You want to report changes and updates. You can add additional information. Just as long as your current information is at the top. And you ping.

    Small problem: what if you don’t work that way? What if you provide running updates at the bottom, and you are piling on new content and information in your feed?

    Who’s going to find your updates? The ping services don’t. You have to use special pings. Set up special feeds.

    There could be a way: include an option in PingOMatic that would let users link a specific page with a specific feed.

    PingOMatic could be enhanced to include a separate box allowing all specified pages to be repinged and reported to the aggregators. Or the services could automatically provide a batch list of URLs to be reviewed.

    Users could have the option to update their entire feed, or to have specific pages within that feed repinged. It would be seamless. There are many ways to make this more complicated than it needs to be.
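
    To make this concrete, here is a minimal sketch of what a per-page ping might look like from the publisher’s side. It assumes the standard weblogUpdates XML-RPC interface and the rpc.pingomatic.com endpoint of the era; treating the "changed URL" as a single buried page, and having the service route it to page-level subscribers, is the hypothetical part.

        import xmlrpc.client

        # Assumed endpoint; weblogUpdates.extendedPing is the standard call,
        # but page-level routing on the service side is the new idea here.
        PING_ENDPOINT = "http://rpc.pingomatic.com/"

        def ping_spot(blog_name, blog_url, changed_page_url, feed_uri):
            """Ping the service about one specific page, not the whole feed."""
            server = xmlrpc.client.ServerProxy(PING_ENDPOINT)
            return server.weblogUpdates.extendedPing(
                blog_name, blog_url, changed_page_url, feed_uri)

        # Example: repinging a single archived entry rather than the front page.
        # ping_spot("My Blog", "http://example.blogspot.com/",
        #           "http://example.blogspot.com/2004/03/old-entry.html",
        #           "http://example.blogspot.com/atom.xml")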

    What blog publishers would like is a way to have a single page with running commentary on a very focused topic. Rather than requiring multiple blog-spots with multiple links, it would be far easier to direct readers to a single page with the updates.

    For example, if I have a running commentary on a specific platform I am monitoring, the entire world doesn’t need the ping of an update. The only one who really might be interested is possibly the CTO.

    So why ping the world, especially when, at most, there might be one interested subscriber to that single content-page?

  • Why would someone want to have a spot feed and not just a blog feed?

    Spot feeds need to be distinguished from blog or site feeds. A spot feed would be something that is part of the main feed, but it is focused on a single page. A spot feed would be needed if we are updating a change page for a specific audience.

    The blogosphere as a whole may not necessarily follow the spot by subscribing to the changes. Rather, only a very narrow audience of interested readers would pay any attention. These specific readers might want to have a simple way to subscribe not just to a feed, but to a running-commentary within a single blog-spot.

    In this case, a spot-feed is created because we have known subscribers who want the original feed or content, and the end-readers are interested in a very specific cut of that information.

    For example, if someone from Newsgator comes across a discussion of aggregators, it’s a distraction if the spot-content changes are tailored to Rojo. In turn, the world doesn’t necessarily need to know the minor details of Rojo’s status, but it is still good for the world to know that someone has raised the issue, saving multiple submissions.

    A spot feed could also be useful if we are preparing a confidential feed within a special blog, but want to maintain the overall blog or feed. We might want to change the access, publicity, and comments about a specific spot or blog-entry without calling attention to the entire feed.

    Blogspots can sometimes serve as a platform for a running tally of data on a very narrow event or discussion, tailored to a very specific topic or group.

    Remember, not all feeds need to have separate channels. It might be nice to have a spot-feed when we are making updates to a very special page that has a high priority, but a very limited number of users.

    The current problem with single-page or spot-feeds is that they still generally require pinging for the content to get reported to the final aggregator; but this is of little help when the single page is buried and the PingOMatic-services never find it.

    A URI can be defined and localized to not only the publisher, but also the content-reader. Spot feeds could build off this relationship and focus the content updates on those who are most likely to be interested in the content.

    Spot feeds could be more than something that the content provider/publisher generates; readers could also create spot-feeds to monitor the ongoing changes in a specific page, within an overall feed.

    Spot feed benefits

    Reduce the number of site pings. If the likely reader-target-audience is very narrow, there’s no reason to ping all the services. Only the subscribing audience would get specifically notified of the update. Indeed, everyone would still get notified that the page contained X-content.

    Reduce the broad-pings to aggregators. In those cases where there are just a few subscribers to a spot feed, it might be more effective to simply target the individual reader with a ping, rather than go after all the services.

    Pings vs. Bots

    One reason for PingOMatic


    The idea behind bots was to auto-find things. But then someone came up with the idea of a ping so that the bots didn’t have to go around looking at the same unchanged pages.

    However, there’s also no need to flood the services with change-pings when the number of interested readers is much smaller.

    There could be a reordering of the pings, or a system that integrated the pings with bots so that there was a balance.

    Content providers are interested in knowing who may be looking for the content, and then targeting that content in future updates. Content providers creating changes have different priorities, and want:

  • Current links to original content

  • Aggregators listing the original link

  • IP sites that have looked at content before changes

  • Those sites having issued search commands for content that is new or added

    What’s needed

    A system that will recognize that content-consumers need specialized tools that can deliver focused change-page updates, and also notify them in a timely way when these changes are available.

    Changes

    Integrated with the original feed


    In the old days of blogging, users were directed to the exact words with strikeouts. I’m not sure if that needs to continue. Last thing I want to read is someone’s hen scratching. If I’m reading a blog update, I want it in camera-ready format.

    If I’m interested in the edits, I’ll ask and I’m sure there’s a way to have them delivered.

    What I’d really like is a system that gives me the final picture of any webpage that gets updated, no matter how deeply it is buried. And later, if I’m interested, I’d like to have my aggregator give me the trace on what the original content was and how things have changed.

    The trick will be to ensure that the aggregator is doing exactly that: Not simply aggregating many feeds into one; but aggregating the various versions of a specific blog-spot into a combined update.

    Call it what you want. I call it a mirror feed. It’s not the actual feed, but it’s the polished version of the original feed, and it has the latest and greatest of everything.

    At a second level are the document traces showing which specific words were changed, repurposed, or edited, and which links got updated or changed.

    The current problem with the feed-aggregator-update-ping-search tool system is that the buried feed-content doesn’t easily get identified and updated. It might, but it’s not working from what I can see in the online versions of Newsgator and Bloglines.

    Who wants to read in their aggregator the ongoing minor changes related to some obscure item? That’s fluff data. It’s not been organized into a final product. It may be relevant to the analysis, but when I’m looking at someone’s feed I don’t want to have to put all the pieces together.

    I want the aggregator to aggregate: Aggregate the pieces and combine the small blogs into a final coherent form. Whether the aggregator automatically does this or the publisher does it I personally don’t care.

    So how are we going to do this? What’s needed is an easier, proven way to convert a single webpage into an XML feed, and then have a simple update on that single page report to the aggregator.


    Summary test results


    I’ve done some testing. I’m not seeing that the single-page updates are working. Rather, what I see is a system that is focused on channels, feeds, and subscriptions.

    It’s not a system that is focused on ensuring that detailed information is updated; nor that buried content is aggregated into a coherent form.

    Think of a word processor. Remember those days when you could look at a document and have no clue what changed? That’s right. Now we have something called wikis.

    The middle version was having a document change log which showed the red-lined version of the document. So you could see the exact changes: who made the changes; when; and what got changed, added, deleted, or stayed the same.
    Changed: Blue
    Added: Green
    Deleted: Red
    Stayed the same: No color.
    Guess what? If you have an aggregator that doesn’t display colors, you’re out of luck. Can’t see the various versions. Doesn’t make sense to have a protocol based on colors when the platforms don’t support colors.

    What’s it going to be? Let the changes drive the aggregator; or come up with a new way to display change-traces without using colors?
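
    As a rough illustration of a color-free change-trace, the sketch below labels each line of a revision in plain text. The difflib comparison is just a stand-in for whatever diff the aggregator would actually run; nothing here is an existing aggregator feature.

        import difflib

        def change_trace(old_text, new_text):
            """Label changes in plain text instead of relying on colors."""
            trace = []
            for line in difflib.ndiff(old_text.splitlines(), new_text.splitlines()):
                if line.startswith("+ "):
                    trace.append("[added]   " + line[2:])
                elif line.startswith("- "):
                    trace.append("[deleted] " + line[2:])
                elif line.startswith("  "):
                    trace.append("[same]    " + line[2:])
                # "? " hint lines from ndiff are dropped here
            return "\n".join(trace)

        print(change_trace("The feed is updated weekly.\nPings go out on publish.",
                           "The feed is updated daily.\nPings go out on publish."))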

    Wow: Strikeout -- That’s a fun command. Show me stuff that changes. How about burying that? And please don’t make me write codes to do that.

    Hey! Let’s have an aggregator that does all this. You know, takes the new content and reformats it.

    Wow. An aggregator that actually integrates with the blogger word-processing tools. One that does more than let me blog and publish; it actually aggregates content into camera-ready products.

    This tool is something that integrates aggregators with word-processing updates. Think of it as a mechanism to actually provide readers with an update that’s high-quality, integrated, and focused.

    With this tool you can look at a single change-page related to a focused product, and have a running tally. Think of it as your private wiki. But it’s something that can be injected into a feed.

    It’s a mirror of what you already published. The tool allows traceability.

    Think of an umbrella. Under that umbrella you have a main handle. That is your main feed. And going into that umbrella are the needed supports that tie into that main handle.

    The umbrella feed is your mirror feed. It includes the main camera-ready spruced up version of your feed, plus all the minor details, changes and updates. It’s all related to the content.

    It needs to be packaged together. And the services need to be ready to support this expanded capability and feature.

    There are a number of ways to actually accomplish this feat of magic. One is having an integrated system that pings the site.

    Here’s the problem. When a publisher provides a single-page update in content, but that content is buried, there’s no way to go back and specifically notify both FeedMesh and PingOMatic of that single webpage update. Yet.

    Who suffers? Searchers may not get access to the details.

    If Technorati has taken a snapshot of your page, but allows no further updates to buried content [because the PingOMatic only looks at the current page], then reader-searchers are suffering. They’ve got a search with specific words, but they can’t find your content.

    That is unless you use this other search engine-tool that is not integrated.

    Why not bring it all together?

    The idea of pings was to reduce the number of bots running around. The idea was that publishers would know what was going on with their feed. The aggregators would get notified. And the people that subscribe to your feed would get alerted.

    Now you have a lot of feed subscribers. But the number of subscribers to a single page might be substantially smaller. What if there was a way to tailor the update so that not everyone was notified of the page, just those who had expressed an interest in that page or its content, or have communicated through the search engine that they are looking for something?

    There’s no reason to ping the services when they don’t take a ping from buried content. Just as there’s no reason to ping an aggregator where none of the subscribers read your content.

    The answer is to have a focused ping. Something that will ping just your readers.

    The problem occurs when you’ve got an update, but the system doesn’t report that change: either as new content in the feed; or as an increment or notification in your aggregator that something has changed and you might want to look at it.

    Sure, if you go to the content and link to it, the feed says it’s updated. There’s new content. But there’s nothing showing up in the aggregator that a particular page has been updated with buried content.

    The user has to know to go back and check the page for updates. But that defeats the purpose of having XML.

    Known information

    Consider what we know:

  • A. The subscriber has already communicated they are interested in the content.

  • B. The URL and URI are of interest to a particular aggregator and subscriber.

  • C. The subscriber relies on pings to get notified of changes.

  • D. Although not all feeds require a ping to report content*, page-updates get buried.

    [* = Aside, surprising test result: Atom feeds from blogger still report changes to Newsgator without a ping through PingOMatic]

    More specifically, there is data that can be used to tailor the pings to [a] publication and [b] updates of deeply archived content. We have known, traceable information, such as:

  • identified, confirmed, and interested subscribers;
  • a confirmed and working feed;
  • a channel between the content provider and interested subscriber;
  • an active aggregator that continues to showcase your content for the reader;
  • your original URL;
  • your feed URI;
  • a ping-back when the reader subscribes;
  • a confirmed number of hits and time on feed; and
  • data on the number of times they followed your feed back to your original platform.


  • There’s an existing relationship. A confirmed channel. And information is already going back and forth. Why do we need a ping; couldn’t we rely on something less than a ping to notify the aggregator-subscriber of an update in buried content?

    Sure. We could lower the threshold to something less than a ping. Something that looked at what the publisher was doing, independent of the ping, and then stood ready to catch and report that update on a buried page as it is published, not pinged.

    It would be nice if there were a method to have a lower threshold, so that further page updates did not require pings. Once we have a confirmed, interested reader in that content-page, the auto-commands should no longer have to go through PingOMatic or the FeedMesh.

    Rather, what would be nice is that once the publisher has a confirmed reader, the reader’s aggregator would go into a different monitoring mode: one that simply detects a publish-command. The aggregator already has the link; the content-reader is already communicating.
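
    A minimal sketch of that lower threshold follows, assuming the publisher platform keeps a record of confirmed subscribers and can fire a callback at publish time. The Subscriber record and notify_on_publish helper are illustrative names, not an existing API.

        from dataclasses import dataclass

        @dataclass
        class Subscriber:
            aggregator_endpoint: str   # where the reader's aggregator listens
            feed_uri: str              # the feed the reader confirmed
            confirmed: bool            # reader already expressed interest

        def notify_on_publish(changed_page_url, subscribers, send):
            """On a bare publish command (no ping), notify only confirmed readers."""
            for sub in subscribers:
                if sub.confirmed:
                    send(sub.aggregator_endpoint,
                         {"feed": sub.feed_uri, "changed_page": changed_page_url})

        # Example, with delivery stubbed out as a print:
        # notify_on_publish("http://example.com/2003/06/archived-entry.html",
        #                   [Subscriber("http://aggregator.example/inbox",
        #                               "http://example.com/atom.xml", True)],
        #                   send=lambda endpoint, payload: print(endpoint, payload))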

    We know that when we publish content, that content will show up, even though formal tools like FeedBurner will not get it.

    Problem with single-page updates: Updates not easily available

    If I update my blog, the XML feed will report when I publish. What doesn’t happen is the services getting notified. But even though Newsgator doesn’t get an official ping, guess what: my blogger-xml feed still shows an update on the atom-feed, and that update shows up.

    Only once. And this also occurs in Bloglines. To get an additional update, I have to go back into the feed, click on it, and get the information. Again, that defeats the purpose of XML feeds: why go back to the original content to get the information we’re supposed to be able to have delivered?

    Even though the feed is connected, and there’s a clean flow of information from the original publisher, along the XML feed, along the URI, into the link, there’s nothing telling me that there’s an update. I don’t see an increment in the aggregator.

    But there’s nothing telling me that the feed has been updated. I have to go check. This is the same as going to the website. Which is the same as saying despite all the XML-stuff, it doesn’t work in all cases.

    What to do? There are a number of options.

    Single page feed support


    What’s needed is an online tool that both:

  • A. Reliably converts single webpages to an XML-feed; and

  • B. Ensures that the feed URI can be easily loaded into an aggregator that reports content.

    Also, the PingOMatic needs to be adjusted so that single-page-updates are incorporated under a feed.
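
    For the first half of that tool, here is a sketch of wrapping one webpage in a minimal feed. It is hand-rolled RSS 2.0 with one item per update; the function name and the choice of RSS over Atom are my own assumptions, not an existing conversion service.

        from xml.etree import ElementTree as ET
        from email.utils import formatdate

        def spot_feed(page_url, page_title, change_summary):
            """Wrap a single webpage in a minimal RSS 2.0 feed, one item per update."""
            rss = ET.Element("rss", version="2.0")
            channel = ET.SubElement(rss, "channel")
            ET.SubElement(channel, "title").text = page_title + " (spot feed)"
            ET.SubElement(channel, "link").text = page_url
            ET.SubElement(channel, "description").text = "Updates to a single page"
            item = ET.SubElement(channel, "item")
            ET.SubElement(item, "title").text = page_title + " updated"
            ET.SubElement(item, "link").text = page_url
            ET.SubElement(item, "pubDate").text = formatdate(usegmt=True)
            ET.SubElement(item, "description").text = change_summary
            return ET.tostring(rss, encoding="unicode")

        print(spot_feed("http://example.com/2004/11/buried-entry.html",
                        "Buried entry", "Added a running-commentary paragraph."))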



  • Easier monitoring


    Third, aggregators need to have thresholds allowing users to apply closer monitoring to those feeds that are already subscribed. A buried content-update in an ancient blog should get reported more easily than new content from an unproven, non-subscribed feed.


    Aggregating changes

    Fourth, the aggregators need to have an aggregation function: to show and aggregate the updates and revisions between blog updates. Think of this as combining feeds and word processing updates with wikis, but using an aggregator to show the changes.

    Again, the changes are not as important as the buried content. I want content more than I want to know which changes occurred. Put the emphasis on writing code that will report content changes, and then I might be impressed that XML can support a strikeout code.

    What’s needed is a method to showcase this mirror feed, while having a fast and quick way to get access to the detailed changes. So those who want to look at the strikeouts can really feel like they’re noticing what’s important.

    Publication-based monitoring

    Fifth, what’s needed is a method to downgrade the reporting-requirements-thresholds on the feeds. Move from simply relying on pings, to one that auto-links an update based on publication.

    Again, once a reader has confirmed they want to subscribe to a feed, this should lower the threshold to get updates and make it easier to access, download, view, and get notified of these revisions.

    Use the existing channel

    Sixth, there needs to be a system that links content with aggregators in a more structured way. By this I mean something that is a more solid pipeline between the content providers and the reader. Once the reader confirms they want updates, then the URI-itself should be the mechanism to transfer information about the updates.

    Think of the URI as a portal or gate between the content provider and the reader-user. Once the reader confirms they want content, the aggregator should then treat that as a more solid link between the reader and the content provider.

    Thus, information should flow more easily back and forth along this channel. This takes us back to the idea of bots. Pings were designed to replace the bots. But what if the aggregators were linked through the URI to the publication system and bypassed the ping-updates?

    Remember, we’re talking about a reader-user who has already communicated to the aggregator that they want this content; so there could be a lower threshold on whether a bot is deployed.

    I’m all for cleaning up the web and making things more efficient. But if we have a confirmed link between the content provider and the reader, then why not have, as a tradeoff, a lower threshold on whether the content is reported as an update in the reader’s aggregator?

    Rather, what’s needed is a system that says, we have a confirmed connection between the content provider and reader; so although we would like to focus on pings and get rid of many bots, let’s now have a bot-deployed when the content provider publishes at any level of the feed.


    Pendulum: Bots vs. Pings

    Imagine a pendulum swinging back and forth between two paradigms: PingOMatic and bots. Perhaps there are special situations when bots would be more appropriate. They can be targeted. They can be deployed without a ping. And they can reliably go after specific things.

    PingOMatic, on the other hand, simply requires a ping in order to work. The problem is: there is no ping when the service doesn’t support a ping related to archives, and there is no ping when the system cannot drill into the content to detect the changes. Tag: Feed drillers

    When we have an update on a single page, perhaps PingOMatic needs the assistance of some special bots. Maybe, in order to reliably report content changes in the ancient archives, the aggregators and publishers need to have a system that integrates their commands. That way the reader’s aggregator could do some burrowing and save the services from having to support these specialized tunneling pings.

    What could happen is this. The bot would go out to the confirmed subscribers and say I have an update here. This bot could act outside the current system or be an addition.

    What I am proposing is that there be a more effective way to communicate a webpage update that is both buried and targeted to those who have already expressed an interest.

    Click-on-content auto-reports

    One approach is to take the task out of the aggregator and ping service. Simply make the [comparison between old and current feeds] a function of [the user clicking on the site]. A user accessing their aggregator could be the signal for the aggregator to compare old and current feeds.

    Again, we have known links to specific subscriptions. What could happen is the user would drive the aggregator to find and report these special change pages only when the user activates the aggregator.

    The aggregator, when physically opened, would take that sign-in as a command to look at all the feed-URIs in more detail. The manual sign-in would be a signal for the aggregator to find the most current status and content revisions on the deeply buried archives.

    The user-triggered-approach could trigger the aggregator to do a comparison between the current feed-version in the aggregator, and the content-updates available outside. This approach would summarize the changes since last review, not since last publication or update.

    In other words, the cycle times between publication changes and content reviews may be out of synch. The aggregator correctly reports publishing changes, but appears to do a poor job of reporting changes in content since the last reader-review.

    The benefit of this approach:

  • The reader-subscriber would get the most up to date information on the content changes
  • There would be no reason for archived pages to get missed
  • PingOMatic would not have to get updated
  • The aggregator would not become a bot, but a focused-feed-monitor

    In short, the benefit of this consumer-triggered approach would be that it would be a real-time ping on the current content status. This ping could be transparent, and part of the sign-in procedure. This would be a demand-driven ping, not the publication-driven ping currently in effect.
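
    A bare-bones sketch of that demand-driven check, assuming the aggregator keeps a digest of what each reader last reviewed. Hashing the whole page is a stand-in for a real content comparison, and the function name is hypothetical.

        import hashlib
        import urllib.request

        def sign_in_check(subscribed_uris, last_reviewed):
            """On reader sign-in, compare each subscription against the last
            reader-review, not against the last ping."""
            changed = []
            for uri in subscribed_uris:
                with urllib.request.urlopen(uri) as response:
                    digest = hashlib.sha1(response.read()).hexdigest()
                if last_reviewed.get(uri) != digest:
                    changed.append(uri)
                    last_reviewed[uri] = digest   # stamp the reader-review
            return changed

        # Example: changed = sign_in_check(["http://example.com/atom.xml"], {})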

    Summary report of changes


    There’s also another consideration. For the sake of caching and ease of content management, one alternative to this approach would be not to send content, only an alert of changes.

    Again, the readers have already signaled they are interested. Why not save bandwidth by requiring the readers to click on the link? They already want the content, so the link is just a formality.

    Philosophically, this approach is not consistent with the XML approach of providing both links and content to the reader.

    However, it remains to be understood through both simulation and research on the expected content volumes whether there would be any savings or benefits to this approach.

    Publish-based ping


    Another approach to monitoring the changes on a specific page would be a notification based on publishing. Again, this is to be distinguished from notification-based-on pinging.

    In this approach, the details would not be gathered until the user clicks on the window. These methods fall into general categories:


    Summary Approaches


  • 1. A system that is [publish EQUALS notify-subscribers-of-change]

  • 2. A system that allows multiple entries in PingOMatic; moves from a single feed update, to one that allows updates to any page at anytime.

  • 3. A mirror feed that is the polished version of the feed.

  • 4. Auto-searches from the aggregator based on content.

  • 5. An aggregator that displays current feed version with the request for information, not based on a feed-ping.

  • 6. Click-on-content auto-reports.

  • 7. Sign-into-aggregator auto-pings publishers for updates.



  • Commands


    What the user needs in the settings is some greater fidelity in the auto-notifications. If the user wants, they should have the option to say: I have a confirmed interest in this feed, so I want to have content delivered to me, even if the bots and PingOMatic don’t know about it, can’t find it, and the content is buried. A rough sketch of such settings follows the list below.

    The user should be able to say that they:

  • Grant permission to get updates based on non-pings, just publications
  • Do not want to wait for a ping to get an update.
  • Want their system to act as an auto-bot to more closely follow those URIs that have a confirmed subscription, regardless of whether there is a ping.
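
    A minimal sketch of what those settings might look like on the reader’s side; the keys are illustrative only and do not correspond to any current aggregator.

        # Hypothetical per-feed reader settings.
        spot_feed_settings = {
            "feed_uri": "http://example.com/atom.xml",
            "confirmed_interest": True,
            "accept_publish_based_updates": True,   # don't wait for a ping
            "follow_archived_pages": True,          # reader-side auto-bot on confirmed URIs
            "max_poll_interval_minutes": 60,
        }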

    As an analogy, let’s remember why the subscriber is looking at blog-comment feeds. These are special feeds that run parallel to the main site feed. Ideally, these feeds would be integrated so they form a single line of information around topics, blog entries, or other tags.

    Archived and comment feeds are useful to compare. Comment feeds tell us, among other things, something about the users.

    Updates


    Users want to be part of the updates. In a similar vein, with respect to archived content changes, the user wants to be able to extract data when the new content is published, to link-lock the URI with the publish command, and get new stuff, changes, and additions.


    How things fit

    Users want to see how the additions relate to the original content. Thus, when we look at archived data, the user wants something that is going to integrate their current changes with the archived versions. The user would want to customize this feed.

    The users would be given the option to format their mirror feed [the finely polished summary feed with all revisions integrated], and the types of information they want [if any] on the raw feed [showing the strikeouts, changes, comparison between earlier and later feed changes].

    Mirror Feeds FAQs


  • Specifics: What is a mirror feed?

    A mirror feed is a polished version of a feed. It aggregates all the changes and comments into a streamlined, highly refined feed. All subsequent changes are incorporated as they occur.

    The mirror feed is different from the main site feed. The main site feed is simply a number of entries that may or may not include all the changes.

    The mirror feed takes these updates, and includes them in final form. The main site feed is supported by several comment-feeds and change update feeds. The mirror feed is like an umbrella, aggregating all this information into the most recent version of the feed.

  • Steps: How does a mirror feed work?

    The mirror feed simply takes the most up to date information and presents it. The mirror feed strips out all the changes, and provides the most recent version.

    These changes are assigned to the change-trace cache where readers can reference them.

  • Demonstration: How does the mirror feed report and publish content changes on a single page?

    Working with single pages or spot-feeds can be interesting. The first problem is to assign a unique URL, then link that URL with the original feed.

    What the mirror feed does is tag the original content, assign the changes to a separate file, then look at the time stamps associated with each change.

    The most recent changes and time stamps take precedence.

    The mirror feed first looks at the original feed, and then incorporates each change step by step. These final changes are tagged with greater weight. During subsequent reviews, these later changes are referenced first.

    In time, there may be an interest in reversing the steps and retracing the changes in the feed. There would be an undo-function in the feed for users who want to strip away the versions. Users will be able to state specifically how far they want to back-step the change-log in the main feed, just as they can tell a browser to increment a URL by X-numbers with a one-click command.

    When working with a single page, the mirror feed receives a notice that there has been a change. The aggregator-bot will then drill into that URI, compare the most recently read content-version with the changes, and then show the differences.

    Once the user marks the changes as all read, this will time-stamp the user-aggregator's change-log. The next reader visit will start the review at that particular time.

    Once the publisher issues a command to open the file, this will act as a signal to the aggregator that a potential change is coming. The aggregator will then jump to the other steps to ensure that potential conflicting tasks and priorities are cleared out.

    The mirror feed will then detect that the publisher has closed out the content with changes. The signal will be the publish-command. The mirror feed will then go into the feed, look at the protocol, and seek the publisher’s batch file of the changes.

    The aggregator’s bot will take the reader’s time stamp [when last reviewed] and compare it to the publisher’s time stamps on the changes. The changes in the publisher’s content since the last reader-review will be grabbed and brought into the reader’s aggregator.

    When the user-reader is done reviewing the updates and changes, they will signal that all has been read. This will once again stamp the aggregator with a code.

    The users can choose to back-step along each publisher change, or since each reader-review to see the changes that have occurred. The reader could either look at the changes since the last publication date, or the changes based on the reader’s review of those changes.
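
    To make the stepping behavior concrete, here is a small sketch of a mirror feed that keeps the original content, a time-stamped change log, and reader-review stamps. The class and method names are mine, not part of any existing feed library, and the time stamps are plain numbers for simplicity.

        from dataclasses import dataclass, field

        @dataclass
        class MirrorFeed:
            original: str                                   # content as first published
            changes: list = field(default_factory=list)     # (timestamp, new_version)
            reader_stamps: list = field(default_factory=list)

            def publish_change(self, timestamp, new_version):
                self.changes.append((timestamp, new_version))
                self.changes.sort()          # most recent time stamps take precedence

            def current(self):
                """The polished, camera-ready version: the latest change wins."""
                return self.changes[-1][1] if self.changes else self.original

            def mark_all_read(self, timestamp):
                self.reader_stamps.append(timestamp)

            def changes_since_last_review(self):
                last = self.reader_stamps[-1] if self.reader_stamps else 0
                return [v for (t, v) in self.changes if t > last]

            def back_step(self, steps=1):
                """Strip away the most recent versions from the change-trace."""
                remaining = self.changes[:-steps] if steps else self.changes
                return remaining[-1][1] if remaining else self.original

    Whether the back-step counts publisher changes or reader reviews is exactly the kind of option described above; this sketch only shows the publisher-change version.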

  • Proof: What coding objectives are met and how is the user ultimately served by this approach?

    See the modules below, which are designed to work without either duplicating current methods or creating inefficiencies by over-engineering something of no value.


    Comparing current approach to the mirror approach


    Currently, the receiver creates a subscription list of what they are reading. Under the mirror approach, the receiver registers with the publisher:

  • I’ve read X-content-changes on this date;
  • If there are updates let me know without me or you asking;
  • I want to be notified even if servers, bots and pings do not occur; and
  • Here is my permission to submit data updates with only a publish command from you.

    As you can see there is a greater level of trust here between the content provider and the subscriber. There’s more data flowing back and forth without outside intervention or notification.

    With the mirror feed, a reader-user can choose to be clear, specific, and open with their project and search objectives.

  • If they’re using a two-way feed search tool, the information would be included as an attachment to the mirror feed.

  • The project objectives become key terms to create frameworks for the content providers to tailor content, updates, and changes

  • Publishers can be in a position to add substance to the reader-searcher’s emerging search structure.

    Mirror feeds can also be used in other ways


    When you think of a mirror feed, don’t focus exclusively on a feed or an aggregator. Think much larger.

  • Mirror feeds give the publisher a narrow number of choices to focus follow-ups

    The nice thing about a mirror feed is that the publisher can create updates for the specific mirror-request. The mirror feed is also something content providers can compare against search requests, to see how the reader-searcher’s search-requests are getting filled in terms of searches, blogrolls, and feeds.

    The publisher would then look at that mirror feed, compare the searches, and tailor the responses and content changes to ensure that the search is getting addressed.

    Mirror feeds also give the publisher tools to choose subscribers with greater information. Publishers can target specific audiences with content, and get feedback through the feed-searches on whether those content revisions are attracting the desired reader.

    The mirror feed moves from a broad array of vague feedback [analog-ish] to more discrete feedback [digital with multiplexers hitting specific content, changes, and searchers].

  • Mirror feeds can be used by the publisher to track searches and content requests, and then direct information to that segment. This is a push-focused aspect available when integrating aggregators and search engines in a two-way search.

  • Mirror feeds can be used to bypass PingOMatic.

    Using a mirror feed means there’s no need to update services that don’t take updates, or whose readers are not subscribing to that feed, content, or spot-feed.

    There’s no need to send pings to all services when only one person is reading the information. By sending the mirror feed once to that reader, there’s no need for all the services to get notified.

    Clearly, the problem occurs when services are not getting pings with updates, but later subscribers show up looking for content and changes.

    This is no problem. Once the subscribers show up and want the old feeds, the old change trace is available. They get with their subscription the mirror feed, the umbrella feed, change traces, and all the time increments associated with previous searches, content updates.

    New subscribers do not get access to other readers’ reviews. This is private, unless readers specifically approve that this information be both collected and released. This has to be a knowing choice, in that readers must know that if they approve, anyone could potentially get information about what they were looking at and when.

    The default option would be not to release this information unless the reader was fully aware of the potential use of this data.

    Ideally, there would be a method to dump the data at regular intervals so that archiving does not become an issue and outside parties are not allowed to access information.

    Ideally, the code could only be accessed through a wifi-system of injecting commands and using a contained-conductor to create the code from building, and not actually injecting code across the firewalls.

  • Mirror feeds are also useful in that the content provider will know when a reader has an upcoming deadline or event.

    This mirror feed could be integrated with a reader’s schedule or plan and identify for the content provider how to tailor information to assist with the reader’s specific objective, if not directly related to the original search request or requirement.

    Again, the goal is to provide novel content in a timely and useful way.

  • Mirror feeds can be integrated with other functions, products, tools, searches, feedback, information, and platform capabilities.

    They are a feed that can be used as a tool, not simply information for the end-user to consume ad nauseam.

    Content providers may want to highlight new features differently. If they know a specific reader prefers a given format, then the mirror feed could be created and provided in a way that meets the end-reader’s demands in form, substance, background detail, and supporting links and material.

    The mirror feed could be tailored knowing how different consumers and readers focus on specifics, their priorities, and their level of oversight. Again, the mirror feed would have all the data; the issue would be then to give the reader the option how they would access this information.

    In those cases where the content provider knows that a given level of management has certain requirements for oversight and responsibility, and these functions are unique to that particular position, the spot-feed could be tailored to a single end-reader with a specific function.

    When the reader changes position, the data could then be reconfigured to highlight the same data, but with a different slant given the new responsibilities.

    Mirror feeds help publishers answer the following questions

  • How can we use a wiki-like structure on a feed?

  • How can we provide readers with a trace of what changed, added, deleted, or remains the same since [a] the last revision and [b] the last review?

  • What method will we use to let the aggregators know that a change has occurred on one page in the deep archives?

  • How will we let our subscriber-readers and others reviewing this content know that a single page has been updated?

    Ideally, the mirror feed architecture would be tied not only to FeedBurner but to any other method of burning a feed.

    This tool would ensure that a single-page update worked with any of the other systems supporting feeds.

    The system would be developed based on a clear understanding of [a] what method is currently used to burn a feed with a single page; and [b] how a single page is updated, reported, and published in the feed without a ping.

    Demonstration Criteria: System Testing


    These goals need to be translated into measurable quantities that can be incrementally examined during post-development testing, just prior to any formal release:

    The goal of this system would be to:

  • Send summary information;

  • Send an alert, in a BitTorrent-like way, that there has been a change; and

  • Support requests for more details.

    The system would auto-send the current links to content first. Those with the highest probability of interest would be linked with whatever routes the bots, data, and logic trails the aggregator take when choosing between equally valid approaches, options, and tasks.

    System Goals


    The system will successfully demonstrate how each page reports changes with publishing, non-publishing, pings, and non-pings.

    The system will show that

  • it can attach an alert message with a URL of the current status;
  • the receiver would know to check the last reference;
  • there was a valid comparison trace between old and new content;
  • readers could have a method to see what was changed, added, deleted, remains the same; and
  • the change-trace can be moved based on either content-provider change-stamps or reader-review change-stamps, either incrementally or in batches of arbitrary size.


    Discussion


    The problem with the current system is that even though there is a content change and update, the aggregators may not necessarily find the update, nor report the feed as having new content. Clearly, there is no new content, it’s just an update.

    Aggregators and readers are not on the same wavelength. Aggregators focus on links and feeds [production-support, pushing], while readers focus on content [consumption, pulling].

    The reader wants to know if there’s a change. The user doesn’t care about the links, or what is going on with the publishers’ feed. Again, the reader-user is looking for information; while the aggregator simply focuses on new content and feeds on top of that.

    As much as I hear the aggregators say that they support the users, I see the opposite. I continue to see a system that is designed more with the publication-push in mind, not with the reader-subscriber.

    XML apparently puts slightly more emphasis on content providers than consumers. Which leads us back to the original problem: why is there not a system that reliably supports content producers’ changes?

    In this case, I can think of no financial benefit to putting the needs of the source of the information behind that of the subscriber.

    Rather, what I detect is a subtle emergence of something that may not have been considered: how do we really handle these deep-archive changes for a very small number of subscribers? It may be a subtle nuance today, but it will grow exponentially as the number of publishers, readers, and archived pages grows.

    Indeed, what’s likely is that there’s already a system out there that already does all this. The trick will be to identify those users who want this function, and are willing to pay for it. As feedback for the XML marketing-types, if this product exists I’m not seeing it showcased as a product feature.

    The other thing I’ve been noticing with such a proposed system is that the aggregator-to-content-on-platform comparison would be one that is focused on each reader. This is to say that if we were to go to such a system that uses an aggregator to ping-content, then each aggregator is simply acting as a bot, thereby defeating the purpose of having a PingOMatic that hopes to reduce bot traffic.

    This is another way of saying that the proposed solution to the single-page-update problem could involve solutions that make a bigger mess than had we simply stayed with the bots and never created PingOMatic.

    Thus, the solution would be to make this single-page-comparison centralized like PingOMatic, but be done in a way that complements what is currently going on, rather than re-invents the proverbial Fred Flintstone wheel.

    Functionally, what could happen is that the users would access the comparison if they select the feed-spot-update.

    If we had this system [whereby reader-users received a change to deeply buried content], what would signal the users that there’s an update to the spot-feed [even if the aggregators do not report those changes unless there is a ping]?

    The problem is that we potentially move into a duplicate system requiring extra levels of comparison. Technically it is feasible; the question remains whether it is efficient. Or whether there is an alternate way to solve these problems:

  • Deeply buried feeds;
  • Updates to the reader-subscriber; and
  • An emerging need for a workaround to the current problem with PingOMatic [with respect to single page updates and deeply archived content]

    Clearly, the short answer is to have each person independently submit each page so the bots can read them. But why do something that takes time, when the goal of these systems is to save time and automate the data transmission? Hello wooden shingles and lead pencils.

    Right now, users have to ping with each update.

  • Is it too much trouble to require them to submit each page when it is updated; or should there be an auto-discovery mode?

  • Or could there be a platform-based approach that centrally records all pages that are changed, then automatically attaches these to the ping-burst, with PingOMatic updated to manage these sub-changes?

    I prefer the latter: The streamlined, automated, and transparent mode. Remember, we’re only talking about 7M blogs.
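
    A rough sketch of that platform-based option: the blog platform records every sub-page touched during an editing session and flushes them as one batch with the next ping-burst. The ledger class, the batch format, and the idea that a ping service would accept such a batch are all assumptions.

        class ChangeLedger:
            """Central record of changed sub-pages, flushed with the next ping-burst."""

            def __init__(self):
                self.changed_pages = set()

            def record(self, page_url):
                self.changed_pages.add(page_url)

            def flush(self, feed_uri, send_ping):
                batch = sorted(self.changed_pages)
                self.changed_pages.clear()
                # One ping-burst carrying the whole batch of sub-changes.
                send_ping({"feed": feed_uri, "changed_pages": batch})
                return batch

        # ledger = ChangeLedger()
        # ledger.record("http://example.com/2002/01/ancient-entry.html")
        # ledger.flush("http://example.com/atom.xml", send_ping=print)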

    What’s going to happen in a year, 10 years with multiple pages? This is going to grow exponentially. There needs to be a way to manage the likely changes, and still keep a trace of the changes, all the while ensuring that deeply buried information is updated, reported, and interested readers are given visibility to what has been changed, updated, deleted, or modified.

    Right now, we don’t have that. Or if it exists, I’m not seeing it. And it’s not working. Hello Enterprise, large-E!

    Commands


    Users should be able to say:

    [do/not do] {auto-include updates in the ping-package for their sub-pages in their blog-platform}

    At the same time, there needs to be a way for the bots that are linked with the aggregators and FeedMesh to discover this content, track the changes, and report to the aggregators the information needed to create the mirror feed: the one that is finely polished.

    Users should be able to say:

  • Where they want to add the information.
  • Which folder they want the updates to go.
  • Whether they want the information broken out into a different file, listed as a separate feed, or integrated with the original comments, or displayed in some other manner.

    Users might want to have the subsequent changes integrated with the comment feeds, or auto-assigned to a folder that is simply updates. Either way, there will not necessarily be a single way users state how they want these subsequent changes reported and filed in the aggregator.
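
    A small sketch of those filing choices as reader-side preferences; again, the keys and the filing helper are illustrative, not an existing aggregator setting.

        # Hypothetical filing preferences for subsequent-change items.
        update_filing = {
            "destination_folder": "Spot updates",
            "display_mode": "integrated",   # or "separate_feed", "own_file"
            "merge_with_comment_feed": False,
        }

        def file_update(update, preferences, folders):
            """Drop an incoming change item into the folder the reader chose."""
            folders.setdefault(preferences["destination_folder"], []).append(update)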

    More about this Mirror Feed

    Think of the mirror feed as something that is an exclusive list. It’s like a comment feed. It’s above and beyond what the normal site feed is.

    But this mirror feed is something special. It’s a select subscription that is tailored to known readers, interested readers, and people who have already expressed a desire to have updates on specific webpages, and who are interested in a very narrow area.

    The mirror feed can be custom-designed for a special group of readers. And content that is on the mirror feed may be deliberately buried or not published for wider consumption.

    Why? Everyone else has clearly indicated they’re not interested. So why bother announcing to the world [via pings] something they don’t want?

    Clearly, one approach is to create a new feed, add it to FeedBurner, and be off with it. But this doesn’t really address the issue of re-integration. It would be nice if there was a way to have the main feed aligned with these comment-content-page-updates, but not require the publishers to create a special feed for each page, and then have a new listing in FeedBurner.

    If we break the feed off from the main site, this integration function will be lost, or require a re-integration command. One approach is to simply design the feeds to be integrated when they are created; as opposed to designing something that recombines something that didn’t necessarily have to be segregated to begin with.

    Again, we’re not creating new feed-accounts. We’re simply adding a special spot-feed because we want to have timely updates, and we also want this update to fall under the umbrella of the main site.

    This is where the umbrella feed comes in. Our main site feed would have overall control of the umbrella; while the specific-page-feeds would also fall under that.

    Imagine having a FeedBurner account where you have one set of statistics. Then for each page, and its update, you had a greater level of detail, showing how each page-spot-feed was getting attention.

    Mirror feed updates

    What these new feeds will have to be able to do is take a signal from the prior events that would prepare the system to provide updates.

    Think about this. When we’re tracking aircraft, we don’t rely on the sonar. We use radar, which sends out a signal; that signal interacts with the aircraft’s underbelly, wings, and cross-section, and then the object acts as a transmitter and says, here I am. Sonar is different. It relies on a bounce.

    What these feeds need to do is move away from sonar, and start noticing what is going on with radar. There are things occurring on the platforms that are not related to pings, but that still provide valuable information related to the impending changes.

    By this I mean that the aggregators and confirmed subscribers could benefit by focusing on what preparatory commands are occurring that would suggest a change to a webpage is about to happen.

    Again, under this model, we’re focusing on getting information to the reader. The goal of this would be to ensure that the updates are timely given.

    If we can’t focus on actual changes [because the aggregator is too busy, or we don’t want to send bots], then the signals prior to the change need to be the preparatory or alert signals for the aggregator [that has a confirmed reader] that something of interest to the customer is probably on the way.

    So, in anticipation of something about to happen, the bots would be deployed to work on other tasks; that way, when the actual command shows up, the system can stop the other tasks, refocus, and be allocated to delivering this highly prized content: a single page from many years ago as an update for a single subscriber.

    This could be a server with a multi-plexer. The question is whether XML and aggregators want to move from a simple system to something that leans in a greater direction toward this type of task-switching in an aggregator platform. If you’re able to do trillions of calculations per second, it may not make much difference.

    But think about 10 years from now. More blogs. More XML feeds. And more archives. Where the number of changes is higher.

    Think about the back-log issues of aggregators. Would it be smart now to design a system that is prepared to handle this expected backlog? When the NYSE designed its upgrade it wasn’t looking at the number of shares traded when the market first opened and they were making scratch marks on wood shingles.

    No, the system was designed with the likely demand loads. Today, PingOMatic doesn’t appear to support these types of retroactive changes to a single, archived page. It remains to be understood whether FeedMesh will support these focused-archived changes.

    The aggregator-designers may have this in mind. But I have yet to understand whether the XML community as a whole is consciously designing aggregators to have a specific task-management function with the old-content-being-updated-and-pinged-with-bots paradigm in mind. It's clear that the current platforms do not.

    Given that the workload, changes, and volume of information are growing faster than expected, it is likely that the potential challenge for aggregators, archives, pings, and tailored feeds will fall off the radar. That is, if it ever gets on there.

    But that doesn’t make the likely problem disappear. It just becomes that much more annoying when the world realizes what could have been done earlier.

    Getting back to the radar-vs-sonar question: the issue for users will be how to rely on the aggregators to signal that there is a change in content without a direct ping, using something that focuses on preparatory commands within the content provider’s platform. The idea is that we don’t rely on a ping to notify of a change; we rely on the external platform’s reaction to a preparatory command as the signaling event.

    This, as opposed to the current system that:

  • waits passively for a ping that only looks at the surface;
  • does nothing to integrate the comment feeds with the specific original blog; and
  • fails to reliably report a single page update from the archives.

    Possibly one solution is to permit greater data exchange between the content provider and subscriber. Perhaps one solution is for the content provider to signal the FeedMesh anytime that an old file is accessed; and then a final check when it is closed.

    Perhaps the solution would be to package these changes at the content provider’s location. In turn, this information would be sent as a single stream when the user is signaling:

  • that they are about to leave; or
  • it is cached [like Excel does if the system freezes up, and it saves your latest version].

    Then the system would automatically send the content changes to the spot-feed next time the user signs in and is doing work on their blog or publishing system.

    Ideally, this would be something that is transparent, but something that the user, publisher, reader, and XML platforms make a conscious decision about. Again, whether the benefits outweigh the workload remains to be articulated.

    What can be said in a general sense is that when the reader subscribes to a given feed, they should also have that subscription interpreted by the platform to mean the reader wants, among other things:

  • updates;
  • to be told of updates,
  • a notice that the updates are available;
  • the updates to reliably report where they want them displayed
  • the option to ignore the original version and stick with the final version
  • their aggregator to display the difference between the original content and the changes;
  • their aggregator to do what it has to do to work with content publishers to make sure the content provider can get content to the reader [to include data sharing, special access privileges];
  • the system to put [this-URI-in-this-aggregator] on the ping list that gets notified when there are future publications of content changes in the deep archives.

    These commands would exist and be maintained alongside the mirror feed, and continue in perpetuity until the feed was canceled or deleted. Remember, there are people who may choose to pass down their feeds in their estate as intellectual property.

    Enterprise: If it was obvious it would not be patentable

    The enterprise in all this is to create a system that is independent; that can work with and integrate with the existing and likely tools; that is robust enough to survive crashes; and that still maintains the flexibility to adapt to the upcoming changes in volume, features, and user demand.

  • What changes do we anticipate occurring in the XML platforms?
  • What features, systems, and designs can we develop today that will accommodate these expected changes?

    Single page update


    It’s interesting to run a simulation of the above features related to a single-page update. To make a long story short, there’s an opportunity to create some of the one-step functions we have seen at Amazon.

    Let’s go through some of the steps an end-user currently has to go through to do a manual one-page update and then review the content.

    First, remember that users are currently unable to rely on the webpage platforms to deliver updates through Newsgator. There’s no visible ping coming through. Bloglines does connect to the feed, but it too doesn’t actually display the updates as an increment.

    Requirements


    What’s needed is a system that does the following (a sketch follows the list):

  • User opens a published item [signal for the reader-aggregator that a change is possible]
  • User makes changes [preparatory command]
  • Platform detects changes saved [publish command]
  • Batch file of changes consolidated and cached on the publisher platform
  • User closes out the site [preparatory signal to have the content reported to the subscriber]
  • Batch-file updates are forwarded to the specific subscribers.
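
    To make the flow concrete, here is a minimal sketch in Python of how a publisher platform might cache changes between the open signal and the close-out, then forward the batch to confirmed spot-feed subscribers. The class and method names (SpotFeedBatcher, open_item, and so on) are purely illustrative assumptions of mine; no existing platform exposes this API.

        import time

        class SpotFeedBatcher:
            """Illustrative only: batch edits to one archived page, then push
            them to confirmed spot-feed subscribers when the author closes out."""

            def __init__(self, page_url, subscribers):
                self.page_url = page_url        # the single archived page being edited
                self.subscribers = subscribers  # notification callbacks for confirmed subscribers
                self.pending = []               # cached changes (the Excel-style cache)

            def open_item(self):
                # preparatory command: warn subscribed aggregators a change may be coming
                for notify in self.subscribers:
                    notify({"url": self.page_url, "event": "pending-change"})

            def record_change(self, text):
                # platform detects a saved change; cache it instead of pinging the world
                self.pending.append({"ts": time.time(), "change": text})

            def close_item(self):
                # close-out signal: forward the consolidated batch to the spot-feed subscribers
                if self.pending:
                    batch = {"url": self.page_url, "event": "published", "changes": self.pending}
                    for notify in self.subscribers:
                        notify(batch)
                    self.pending = []

    A blogging platform could call open_item when the archived post is opened for editing, record_change on each save, and close_item when the author signs off for the session.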


    Today, we don’t have this. What actually happens is the following, and the revised system needs to fix these problems:

    First, when users save content they are not given a ready link to create a single-page feed.

    Rather, they have to go to a separate site to create a spot-feed, then re-enter this spot-feed manually. This is only possible after publishing the spot, grabbing the URL, then throwing it back into the content. This is a single platform. I should be able to know what my exact URL and URI will be before I publish, and have this automatically loaded when I signal that I want a link to create a spot-feed.

    Second, there’s no quick way to inject that newly-created spot-feed into the spot. They have to manually open the feed.

    We’ve got robots that can take java code from an external platform and create a new comment-protocol that is integrated. I see no reason why the same type of approach could not be applied. Indeed, since this is my platform there’s no reason why it can’t happen: I don’t have to give myself permission to inject a special code on my platform.

    Third, there’s no quick way to document changes, link them with time, or ensure the administrative tasks are organized.

    This is problematic. These are multiple tasks that could be automated and integrated into a single step, just as we have seen at Amazon; a sketch of such a one-step function follows this list. Currently, users have to manually:

  • flag the content;
  • inject a date-time stamp by referencing an external feed;
  • assign a tag to that content;
  • create a schema to integrate this spot-feed-tag with similar tags;
  • advertise this new tag; and
  • ensure that specific content-consumers are aware of the changes.
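
    Here is one way the manual steps in that list might collapse into a single call. This is a sketch under the assumption that the publishing platform can stamp and tag content locally; the function name and fields are hypothetical.

        import datetime

        def one_step_spot_update(page_url, tag, subscribers):
            """Hypothetical one-step version of the manual tasks listed above."""
            entry = {
                "url": page_url,
                "flagged": True,                                                    # flag the content
                "updated": datetime.datetime.now(datetime.timezone.utc).isoformat(),  # local date-time stamp
                "tags": [tag],                                                      # assign a tag to that content
            }
            # advertise the tag and alert the specific content-consumers in the same step
            for notify in subscribers:
                notify(entry)
            return entry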

    Fourth, the tagging mechanism is not integrated with these special updates.

    Who would’ve thought that we’d have code and content updates occurring faster than the tagging protocols could keep up with. I’m not sure what’s getting developed faster: New aggregators to integrate tags, or new tags to chase aggregators. It’s all so confusing!

    Fifth, there’s a problem with the new URLs. For some reason the aggregators do not appear to handle some URLs, while other platforms can handle the extended URLs. What’s needed is a mechanism to shorten the URI to something manageable that is linked to the original URI, acts as an umbrella over the original feed, and is something the existing readers are going to recognize.

    At the same time these shortened-URIs need to be integrated and support the likely code-matching function required under the two-way search engine.
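
    One minimal way to picture this, assuming nothing about how the services actually store URIs: keep a registry that maps a short, manageable code to both the original page URI and the umbrella feed it lives under, so either end can be recovered. The names and the shortening scheme here are illustrative only.

        import hashlib

        spot_registry = {}   # short URI -> (original page URI, umbrella site feed)

        def shorten(original_uri, umbrella_feed):
            # derive a short, stable code from the original URI (illustrative scheme)
            code = hashlib.md5(original_uri.encode("utf-8")).hexdigest()[:6]
            short = "example.com/s/" + code
            spot_registry[short] = (original_uri, umbrella_feed)
            return short

        def resolve(short_uri):
            # an aggregator or search tool can recover both the page and its parent feed
            original, umbrella = spot_registry[short_uri]
            return {"page": original, "umbrella_feed": umbrella}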

    I see cache. An archiving function. And a system of code-matching. This calls for a computer, Fred Flintstone!

    Summary Requirements


    Ideal: Let’s review the user requirements

    A system that:

  • Reports content changes to single pages along a feed;

  • Provides updates and reports changes to the aggregators;

  • Allows easier data flows between confirmed subscribers and content providers; and

  • Allows for content changes to be delivered regardless of how deep or old they are in the archive.


    Summary Goals


    Requirements: Let’s define the system goals and final architecture

  • A single page update feature in PingOMatic

  • A deliver-update based on publishing content changes


    XML Concepts: Enabling technology required to accomplish these feats


    Auto-discovery of content updates


    Aggregators should, when reader-subscribers open the platform, be on the lookout for content changes since last review.

    Aggregator bot with multiplexer

    The aggregator would interface with the publishers to ensure that the reader-subscriber-aggregator is focusing on tasks that would otherwise get overshadowed once the content changes are reported.

    Preparatory command detection on the content provider’s platform

    When a content-provider opens archived content, the publisher’s system will alert existing subscribers’ aggregators that potential changes are on the way.

    Publish-notify concept

    Once the archived document is re-published, the publisher’s system will send a ping to the existing feed-page-content-subscribers.

    Workaround in PingOMatic to allow lower-level updates

    This is a method to include single-page feed-updates underneath the site-feed-umbrella.

    Tagging updates in the XML servers and search tools

    This takes the existing content as reported to search platforms like Technorati, and ensures that the appropriate tags are also indexed when the content is updated.

    Support for the deep archiving and updating of old content in searches, access, updates

    There needs to be a method that ensures the archived content is searchable; that updates are available for both searchers and existing spot-feed subscribers; and that when content changes in archived content that the spot-feeds are updated and reader-subscribers get notified of these changes either in summary or in detailed form.

    Aggregators that have mirror feeds, can create, and support mirror feeds

    These mirror feeds are polished, integrated, and show the summary version; and the feeds are umbrellas over the raw feeds which show the detailed changes, and the sub-page feeds.

    Coding: Modules to accomplish the tasks

    Let's go through some of the modules needed to accomplish the above system goals, and implement these XML Concepts.

    Module 1: Identify existing subscribers to current feeds

    This module would ensure that the subscribers to current feeds are known, correct, and that there is a link between what the content-readers want and what they are actually getting.

    This module would do checks on parallel feeds to make sure that the content-changes getting reported on one channel match the content-changes available on the other channels.

    The module would do sampling on subscriptions from different subscribers to ensure that the content changes given to one subscriber match those given to another subscriber of the same content feed. Where there are differences, these trends are noted and a bot would then be deployed to investigate the nature of the differences.
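
    A sketch of what that sampling might look like, assuming each delivery can be reduced to a content digest; the grouping logic is illustrative, and the investigation bot itself is not shown.

        import hashlib
        from collections import defaultdict

        def sample_deliveries(deliveries):
            """deliveries: (subscriber_id, channel, content) tuples for the same feed item.
            Returns groups of subscribers who received differing content, if any."""
            groups = defaultdict(list)
            for subscriber, channel, content in deliveries:
                digest = hashlib.sha1(content.encode("utf-8")).hexdigest()
                groups[digest].append((subscriber, channel))
            # more than one digest means the channels diverged; hand the groups
            # to an investigation bot (not shown) to work out why
            return list(groups.values()) if len(groups) > 1 else []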

    Module 2: Publish-notify

    This module would act as an interface between the publisher’s platform and the reader-aggregator’s notifications. The bot would get prepared when the archived content is accessed; and the bot would be activated once the content is republished.

    Once the content was republished, the bot would go along the existing channel between the content-provider and the existing subscriber to ensure that the spot-feed was updated. This would be independent of any site-feed-update.

    Module 3: Lower threshold alert on aggregator

    Once the subscriber confirms they want this URI, the publisher and reader then have lower thresholds to exchange data along an XML feed. The reader no longer has to rely on a bot to get notifications of a single feed; and the publisher no longer has to ask for permission to send updates.

    Rather, once the reader subscribes to the spot feed, the reader then gives permission for the bot to use the XML feed to make these notifications without a ping.

    Module 4: Receive commands from publishers for aggregator time management

    Once the aggregator receives notice that the archived content has been accessed, the aggregator would shift focus to other tasks.

    Once the publish-command was issued, the aggregator would then refocus on the incoming data to ensure that the content was adequately showcased, the links were valid, and the content updates were correctly reported, and the reader was correctly notified of the change in content status.

    Module 5: Report content changes without ping; publish to existing subscribers

    After executing the publish-command, the publisher’s system would automatically notify the confirmed subscribers directly, without going through the general ping services.

    Module 6: Report mirror feed

    The mirror feed is the polished version of the content feed. The mirror feed includes all changes, a trace on content changes, and provides a system for readers to trace the content changes both forward and backward in time.

    The system would correctly change the content and maintain the polished feed. There would be seamless flow of information from the page update commands.
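
    A minimal sketch of that idea, under the simplifying assumption that each change is a full snapshot of the page rather than a diff; the function and field names are made up for illustration.

        def build_mirror(original, changes):
            """original: the first published text.
            changes: list of {"ts": time-stamp, "text": full replacement text}.
            Returns the polished (latest) version plus the full trace, oldest first."""
            ordered = sorted(changes, key=lambda c: c["ts"])   # most recent applied last
            trace = [{"ts": 0, "text": original}] + ordered    # lets readers step forward or backward
            polished = ordered[-1]["text"] if ordered else original
            return polished, trace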

    Module 7: Monitor umbrella feed

    This module would report the umbrella feed with its sub-feeds; correctly report content and link content changes to the original version; and ensure that content is displayed how users want it.

    The module would ensure that the most current mirror feed correctly placed the content changes in the correct spot-feed, and that the order of the information was correct and consistent with the time-stamps.

    This module would make sure that the subfeeds were correctly assigned to the correct umbrella feed; that the tags and links between the sub feeds and umbrella feeds were consistent; and that there was a valid method for ensuring the assigned tags to either feed also applied to the other feed.

    Module 8: Check module

    This module’s purpose is to ensure the subscription request is accurately matched with the content. This involves data checking and updates, and cross-channel sampling to ensure multiple channels carrying the same or similar feeds are reporting the same content.

    This module would ensure that the umbrella feed as displayed actually matches the umbrella feed that is forecasted. There would be sampling within this module.

    The module would compare the actual umbrella feed with the forecasted feed. There would be sampling on the time-stamps to ensure that the actual feeds match the expected order and flow. Deviations would be reported.
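
    A sketch of the comparison, assuming both the displayed and forecasted feeds can be reduced to lists of entries with an id and a time-stamp; the deviation labels are illustrative.

        def check_feed(actual, expected, sample_size=5):
            """Compare a sample of the displayed umbrella feed against the forecast
            and confirm the time-stamps run newest-first. Returns any deviations."""
            deviations = []
            for shown, forecast in list(zip(actual, expected))[:sample_size]:
                if shown["id"] != forecast["id"]:
                    deviations.append(("wrong item", shown["id"], forecast["id"]))
            stamps = [entry["ts"] for entry in actual]
            if stamps != sorted(stamps, reverse=True):
                deviations.append(("time-stamp order broken", None, None))
            return deviations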

    Module 9: Reconciliation module

    This module would look at what was specifically not working correctly in the outputs. This module ensures that differences are investigated and resolved. Some problems might be:

  • Coding problem;
  • Bugs in one of the platforms;
  • Time-stamps were not working; or
  • Mirror feed content-order was not consistent with the timestamps.

    This module includes an auto-notify of summary reports to corporate board and likely impacts to schedule changes and funding requirements.

    Module 10: Simulation

    This module would do stress testing on the platform. This module looks at the change in number of pages that are archived, and forecasts the likely workloads and ping requirements.

    This module shows the number of changes, the data management requirements, and the hardware likely required to support them.

    The goal of this module is to identify trend lines where growth rates at various exponential rates would outstrip available supplies.
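
    A crude illustration of such a trend line, with entirely made-up numbers: how many years until an archive growing exponentially outstrips a fixed ping or storage budget.

        def years_until_capacity(pages_today, annual_growth, capacity):
            # count years until exponential growth in archived pages exceeds capacity
            pages, years = float(pages_today), 0
            while pages <= capacity and years < 100:
                pages *= (1.0 + annual_growth)
                years += 1
            return years

        # e.g. one million archived pages growing 80% a year against a 100-million-page budget
        print(years_until_capacity(1_000_000, 0.80, 100_000_000))   # prints 8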

    This module can act as a catalyst to communicate new innovation needed; and act as a catalyst for other discussion on whether the baseline assumptions need to be rethought; or whether new technology would better manage the system.

    The simulation module also identifies defects in current standards, and works with academia to act as an incentive to develop easier methods to accomplish tasks, or develop new coding alternatives to XML, or get refinements in the standards to update and get a more reasonable solution given the simulated demand loads.

    Summary

    You have a roadmap to create a system that supports spot-feeds and can report changes to deeply archived data. Good luck!


    XML Single Page Update

    Targeted updates to subscribers


    The thing about feeds is that you can get a lot of content. What happens when you find a specific page you want to monitor?

    Right now, the XML support tools are geared toward providing updates on new feeds and content. They’re not geared toward specific pages and minor changes.

    Let’s say you’re running a site. You want to report changes and updates. You can add additional information. Just as long as your current information is at the top. And you ping.

    Small problem. What if you don’t work that way? What if you provide running updates at the bottom, and you are piling on new content and information in your feed?

    Who’s going to find your updates? The ping services don’t. You have to use special pings. Set up special feeds.

    There could be a way. What could be done is include an option in PingOMatic. This would let users link a specific page with a specific feed.

    PingOMatic could be enhanced to include a separate box allowing all specified pages to be repinged and reported to the aggregators. Or the services could automatically provide a batch list of URLs to be reviewed.

    Users could have the option to update their entire feed. Or they could have the option to include specific pages within that feed to get repinged. It would be seamless. There are many ways to make this more complicated than it needs to be.

    What blog publishers would like is a way to have a single page with running commentary on a very focused topic. Rather than require multiple blog-spots with multiple links. It would be far easier to direct readers to a single page with the updates.

    For example, if I have a running commentary on a specific platform I am monitoring, the entire world doesn’t need the ping of an update. The only one who really might be interested is possibly the CTO.

    So why ping the world, especially when, at most, there might be one interested subscriber to that single content-page?

  • Why would someone want to have a spot feed and not just a blog feed?

    Spot feeds need to be distinguished from blog or site feeds. A spot feed would be something that is part of the main feed, but it is focused on a single page. A spot feed would be needed if we are updating a change page for a specific audience.

    The blogosphere as a whole may not necessarily follow the spot by subscribing to the changes. Rather, only a very narrow audience of interested readers would pay any attention. These specific readers might want to have a simple way to subscribe not just to a feed, but to a running-commentary within a single blog-spot.

    In this case, a spot-feed is created because we have known subscribers who want the original feed or content, and the end-readers are interested in a very specific cut of that information.

    For example, if someone from Newsgator comes across a discussion of aggregators, it’s a distraction if the spot-content changes are tailored to Rojo. In turn, the world doesn’t necessarily need to know the minor details of Rojo’s status, but it is still good for the world to know that someone has raised the issue, saving multiple submissions.

    A spot feed could also be useful if we are preparing a confidential feed within a special blog, but want to maintain the overall blog or feed. We might want to change the access, publicity, and comments about a specific spot or blog-entry without calling attention to the entire feed.

    Blogspots can sometimes serve as a platform to have a running tally of data on a very narrow event, discussion and tailored to a very specific topic or group.

    Remember, not all feeds need to have separate channels. It might be nice to have a spot-feed when we are making updates to a very special page that has a high priority, but a very limited number of users.

    The current problem with single-page or spot-feeds is that they still generally require pinging for the content to get reported to the final aggregator; but this is of little help when the single page is buried and the PingOMatic-services never find it.

    A URI can be defined and localized to not only the publisher, but also the content-reader. Spot feeds could build off this relationship; focus the content updates to those who are most likely to be interested in the content.

    Spot feeds could be more than something that the content provider/publisher generates; readers could also create spot-feeds to monitor the ongoing changes in a specific page, within an overall feed.

    Spot feed benefits

    Reduce the number of site pings. If the likely reader-target-audience is very narrow, there’s no reason to ping all the services. Only the subscribing audience would get specifically notified of the update. Indeed, everyone would still get notified that the page contained X-content.

    Reduce the broad-pings to aggregators. In those cases where there are just a few subscribers to a spot feed, it might be more effective to simply target the individual reader with a ping, rather than go after all the services.

    Pings vs. Bots

    One reason for PingOMatic


    The idea behind bots was to auto-find things. But then someone came up with the idea of a ping so that the bots didn’t have to go around looking at the same unchanged pages.

    However, there’s also no need to flood the services with change-pings when the number of interested readers is much smaller.

    There could be a reordering of the pings, or a system that integrated the pings with bots so that there was a balance between the two.

    Content providers are interested in knowing who may be looking for the content, and then targeting that content in future updates. Content providers creating changes have different priorities, and want:

  • Current links to original content

  • Aggregators listing the original link

  • IP sites that have looked at content before changes

  • Those sites having issued search commands for content that is new or added

    What’s needed

    A system that will recognize that content-consumers need specialized tools that can deliver them focused change-page updates, and notify them in a timely way when these changes are available.

    Changes

    Integrated with the original feed


    In the old days of blogging, users were directed to the exact words with strikeouts. I’m not sure if that needs to continue. Last thing I want to read is someone’s hen scratching. If I’m reading a blog update, I want it in camera-ready format.

    If I’m interested in the edits, I’ll ask and I’m sure there’s a way to have them delivered.

    What I’d really like is a system that gives me the final picture of any webpage that gets updated, no matter how deep it is buried. And, later if I’m interested, I’d like to have my aggregator give me the trace on what the original content was and how things have changed.

    The trick will be to ensure that the aggregator is doing exactly that: Not simply aggregating many feeds into one; but aggregating the various versions of a specific blog-spot into a combined update.

    Call it what you want. I call it a mirror feed. It’s not the actual feed, but it’s the polished version of the original feed, and it has the latest and greatest of everything.

    At a second level are the document traces on what the specific words that were changed, repurposed, edited, and which links got updated or changed.

    The current problem with the feed-aggregator-update-ping-search tool system is that the buried feed-content doesn’t easily get identified and updated. It might, but it’s not working on what I can see in the online versions of Newsgator and Bloglines.

    Who wants to read in their aggregator the ongoing minor changes related to some obscure item? That’s fluff data. It’s not been organized into a final product. It may be relevant to the analysis, but when I’m looking at someone’s feed I don’t want to have to put all the pieces together.

    I want the aggregator to aggregate: Aggregate the pieces and combine the small blogs into a final coherent form. Whether the aggregator automatically does this or the publisher does it I personally don’t care.

    So how are we going to do this? What’s needed is an easier, proven way to convert a single webpage into an XML feed; and then have a simple update on that single page report to the aggregator.


    Summary test results


    I’ve done some testing. I’m not seeing that the single-page updates are working. Rather, what I see is a system that is focused on channels, feeds, and subscriptions.

    It’s not a system that is focused on ensuring that detailed information is updated; nor that buried content is aggregated into a coherent form.

    Think of a word processor. Remember those days when you could look at a document and have no clue what changed? That’s right. Now we have something called wikis.

    The middle-version was having a document change log which showed the red-lined version of the document. So you could see the exact changes: Who made the changes; when; what got changed, added, deleted, or stayed the same.
    Changed: Blue
    Added: Green
    Deleted: Red
    Stayed the same: No color.
    Guess what? If you have an aggregator that doesn’t display colors, you’re out of luck. Can’t see the various versions. Doesn’t make sense to have a protocol based on colors when the platforms don’t support colors.

    What’s it going to be? Let the changes drive the aggregator; or come up with a new way to display change-traces without using colors?

    Wow: Strikeout -- That’s a fun command. Show me stuff that changes. How about burying that? And please don’t make me write codes to do that.

    Hey! Let’s have an aggregator that does all this. You know, takes the new content and reformats it.

    Wow. An aggregator that actually integrates with the blogger-wordprocessing tools. But does more than let me blog and publish; but actually aggregates content into camera ready products.

    This tool is something that integrates aggregators with wordprocessing updates. Think of it as a mechanism to actually provide an update to the readers that’s quality, integrated, and something that is focused.

    With this tool you can look at a single change-page related to a focused product, and have a running tally. Think of it as your private wiki. But it’s something that can be injected into a feed.

    It’s a mirror of what you already published. The tool allows traceability.

    Think of an umbrella. Under that umbrella you have a main handle. That is your main feed. And also, going into that umbrella are the needed supports to that main handle and umbrella.

    The umbrella feed is your mirror feed. It includes the main camera-ready spruced up version of your feed, plus all the minor details, changes and updates. It’s all related to the content.

    It needs to be packaged together. And the services need to be ready to support this expanded capability and feature.

    There are a number of ways to actually accomplish this feat of magic. One is having an integrated system that pings the site.

    Here’s the problem. When a publisher provides a single-page update in content, but that content is buried, there’s no way to go back and specifically notify both FeedMesh and PingOMatic of that single webpage update. Yet.

    Who suffers? Searchers may not get access to the details.

    If Technorati has taken a snapshot of your page, but allows no further updates to buried content [because the PingOMatic only looks at the current page], then reader-searchers are suffering. They’ve got a search with specific words, but they can’t find your content.

    That is unless you use this other search engine-tool that is not integrated.

    Why not bring it all together?

    The idea of pings was to reduce the number of bots running around. The idea was that publishers would know what was going on with their feed. The aggregators would get notified. And the people that subscribe to your feed would get alerted.

    Now you have a lot of feed subscribers. But the number of subscribers to a single page might be substantially smaller. What if there was a way to tailor the update so that not everyone was notified of the page, just those who had expressed an interest in that page or the content, or who have communicated through the search engine that they are looking for something.

    There’s no reason to ping the services when they don’t take a ping from buried content. Just as there’s no reason to ping an aggregator where none of the subscribers read your content.

    The answer is to have a focused ping. Something that will ping just your readers.
    The problem occurs when you’ve got an update, but the system doesn’t report that change: Either as new content in the feed; or as an increment or notification in your aggregator that something has changed and you might want to look at it.

    Sure, if you go to the content and link to it, the feed says it’s updated. There’s new content. But there’s nothing showing up in the aggregator that a particular page has been updated with buried content.

    The user has to know to go back and check the page for updates. But that defeats the purpose of having XML.

    Known information

    Consider what we know:

  • A. The subscriber has already communicated they are interested in the content.

  • B. The URL and URI are of interest to a particular aggregator and subscriber.

  • C. The subscriber relies on pings to get notified of changes.

  • D. Although not all feeds require a ping to report content*, page-updates get buried.

    [* = Aside, surprising test result: Atom feeds from blogger still report changes to Newsgator without a ping through PingOMatic]

    More specifically, there is data that can be used to tailor the pings to [a] publication and [b] updates of deeply archived content. We have known, traceable information, such as the items below (a sketch of such a subscription record follows the list):

  • identified, confirmed, and interested subscribers;
  • a confirmed and working feed;
  • a channel between the content provider and interested subscriber;
  • an active aggregator that continues to showcase your content for the reader;
  • your original URL;
  • your feed URI;
  • a ping-back when the reader subscribes;
  • a confirmed number of hits and time on feed; and
  • data on the number of times they followed your feed back to your original platform.
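
    All of this could be kept as a simple per-subscription record on either end of the channel. A sketch follows; every field name here is hypothetical.

        # Hypothetical per-subscription record holding the traceable data listed above
        subscription = {
            "subscriber_id": "reader-42",                     # identified, confirmed, interested subscriber
            "feed_uri": "example.com/atom.xml",               # confirmed, working feed
            "page_url": "example.com/2004/03/archived.html",  # original URL being watched
            "pingback_on_subscribe": True,                    # ping-back received when the reader subscribed
            "hits": 17,                                       # confirmed hits and time on feed
            "clickthroughs": 4,                               # times the reader followed the feed back to the platform
        }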


    There’s an existing relationship. A confirmed channel. And information is already going back and forth. Why do we need a ping; couldn’t we rely on something less than a ping to notify the aggregator-subscriber of an update in buried content?

    Sure. We could lower the threshold to something less than a ping. Something that looked at what the publisher was doing, independent of the ping, and then stand ready to catch and report that update on a buried page as it is published, not pinged.

    It would be nice if there was a method to have a lower threshold. So that further page updates did not require pings. Rather, once we have a confirmed-interested reader in that content-page, then the auto-commands should no longer have to go through PingOMatic or the FeedMesh.

    Rather, what would be nice is once that publisher already has a confirmed reader, that the reader’s aggregator would then go to a different monitoring mode. One that simply detects a simple publish-command. The aggregator already has the link; the content-reader is already communicating.

    We know that when we publish content, that content will show up, even though formal tools like FeedBurner will not pick it up.

    Problem with single-page updates: Updates not easily available

    If I update my blog, the XML feed will report when I publish. What doesn’t happen is notification of the services. But even though Newsgator doesn’t get an official ping, guess what: my blogger-xml feed still shows an update on the atom-feed, and that update shows up.

    Only once. And this also occurs in Bloglines. To get an additional update, I have to go back into the feed, click on it, and get the information. Again, that defeats the purpose of XML feeds: Why go back to the original content to get the information we’re supposed to be able to have delivered.

    Even though the feed is connected, and there’s a clear, clean flow of information from the original publishers, along the XML feed, along the URI, into the link, there’s nothing telling me that there’s an update. I don’t see an increment in the aggregator.

    But there’s nothing telling me that the feed has been updated. I have to go check. This is the same as going to the website. Which is the same as saying despite all the XML-stuff, it doesn’t work in all cases.

    What to do? There are a number of options.

    Single page feed support


    What’s needed is an online tool that both:

  • A. Reliably converts single webpages to an XML-feed; and

  • B. Ensures that that feed URI can be easily loaded to an aggregator that reports content.

    Also, the PingOMatic needs to be adjusted so that single-page-updates are incorporated under a feed.
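
    As a rough picture of requirement A, here is a sketch that wraps one page in a minimal one-item RSS 2.0 feed. It is not how FeedBurner or any existing service does it; it simply shows how little is actually needed to give a single page its own feed URI.

        from xml.sax.saxutils import escape

        def page_to_rss(title, page_url, summary):
            """Wrap a single webpage in a minimal one-item RSS 2.0 feed (sketch only)."""
            t, u, s = escape(title), escape(page_url), escape(summary)
            lines = [
                '<?xml version="1.0"?>',
                '<rss version="2.0"><channel>',
                '<title>%s (spot feed)</title>' % t,
                '<link>%s</link>' % u,
                '<description>Spot feed for a single page</description>',
                '<item><title>%s</title><link>%s</link>' % (t, u),
                '<description>%s</description></item>' % s,
                '</channel></rss>',
            ]
            return "\n".join(lines)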



    Easier monitoring


    Third, aggregators need to have thresholds allowing users to have closer monitoring of those feeds to which they are already subscribed. A buried content-update in an ancient blog should get reported more easily than a new feed from an unproven, non-subscribed source.


    Aggregating changes

    Fourth, the aggregators need to have an aggregation function: To show and aggregate the updates and revisions between blogupdates. Think of this as combining feeds and word processing updates with wikis, but using an aggregator to show the changes.

    Again, the changes are not as important as the buried content. I want content more than I want to know which changes occurred. Put the emphasis on writing code that will report content changes, and then I might be impressed that XML can support a strikeout code.

    What’s needed is a method to showcase this mirror feed, while having a fast and quick way to get access to the detailed changes. So those who want to look at the strikeouts can really feel like they’re noticing what’s important.

    Publication-based monitoring

    Fifth, what’s needed is a method to downgrade the reporting-requirements-thresholds on the feeds. Move from simply relying on pings, to one that auto-links an update based on publication.

    Again, once a reader has confirmed they want to subscribe to a feed, this should lower the threshold to get updates and make it easier to access, download, view, and get notified of these revisions.

    Use the existing channel

    Sixth, there needs to be a system that links content with aggregators in a more structured way. By this I mean something that is a more solid pipeline between the content providers and the reader. Once the reader confirms they want updates, then the URI-itself should be the mechanism to transfer information about the updates.

    Think of the URI as a portal or gate between the content provider and the reader-user. Once the reader confirms they want content, the aggregator should then say, that they have a more solid link between the reader and the content provider.

    Thus, information should flow more easily back and forth along this. This takes us back to the idea of bots. Pings were designed to replace the bots. But what if the aggregators were linked through the URI to the publication system, and bypassed the ping-updates.

    Remember, we’re talking about a reader-user who has already communicated to the aggregator that they want this content; so there could be a lower threshold on whether a bot is deployed.

    I’m all for cleaning up the web and making things more efficient. But if we have a confirmed link between a content provider and a reader, then why not have as a tradeoff a lower threshold on whether the content is published-reported as an update in the reader’s aggregator.

    Rather, what’s needed is a system that says, we have a confirmed connection between the content provider and reader; so although we would like to focus on pings and get rid of many bots, let’s now have a bot-deployed when the content provider publishes at any level of the feed.


    Pendulum: Bots vs. Pings

    Imagine a pendulum swinging back and forth between two paradigms: PingOMatic and bots. Perhaps there are special situations when bots would be more appropriate. They can be targeted. They can be deployed without a ping. And they can reliably go after specific things.

    PingOMatic, on the other hand requires, simply, a ping for it to work. The problem is: There is no ping when the service doesn’t support a ping-related-to-archives. And there is no ping when the system cannot drill into the content to detect the changes. Tag: Feed drillers

    When we have an update on a single page, perhaps PingOMatic needs the assistance of some special bots. Maybe, in order to reliably report content changes in the ancient archives, the aggregators and publishers need to have a system that integrates their commands. That way the reader’s aggregator could do some burrowing and save the services from having to support these specialized tunneling pings.

    What could happen is this. The bot would go out to the confirmed subscribers and say I have an update here. This bot could act outside the current system or be an addition.

    What I am proposing is that there be a more effective way to communicate an update to a webpage that is both buried and targeted to those who have already expressed an interest.

    Click-on-content auto-reports

    One approach is to take the task out of the aggregator and ping service: simply make the [comparison between old and current feeds] a function of [the user clicking on the site]. A user accessing their aggregator could be the signal for the aggregator to compare old and current feeds.

    Again, we have known links to specific subscriptions. What could happen is the user would drive the aggregator to find and report these special change pages only when the user activates the aggregator.

    The aggregator, when physically opened, would take that sign-in as a command to look at all the feed-URIs in more detail. The manual sign-in would be a signal for the aggregator to find the most current status and content revisions on the deeply-buried archives.

    The user-triggered-approach could trigger the aggregator to do a comparison between the current feed-version in the aggregator, and the content-updates available outside. This approach would summarize the changes since last review, not since last publication or update.

    In other words, the cycle times between publication changes and content reviews may be out of synch. The aggregator correctly reports publishing changes, but appears to do a poor job of reporting changes in content since the last reader-review.
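
    A sketch of the demand-driven check, assuming the aggregator keeps a last-reviewed time-stamp per subscription and can fetch the current state of a page through some caller-supplied function; both assumptions are mine, not a description of any existing aggregator.

        def changes_since_last_review(subscriptions, fetch_current):
            """On aggregator sign-in, compare each subscribed spot-feed against what
            the reader last reviewed, not against the last publication or ping."""
            report = []
            for sub in subscriptions:
                current = fetch_current(sub["feed_uri"])   # demand-driven check, no ping involved
                if current["ts"] > sub["last_reviewed_ts"]:
                    report.append({"feed": sub["feed_uri"], "new_since_review": current})
            return report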

    The benefit of this approach:

  • The reader-subscriber would get the most up to date information on the content changes
  • There would be no reason for archived pages to get missed
  • PingOMatic would not have to get updated
  • The aggregator would not become a bot, but a focused-feed-monitor

    In short, the benefit of this consumer-triggered approach would be that it would be a real-time ping on the current content status. This ping could be transparent, and part of the sign-in procedure. This would be a demand-driven ping, not the publication-driven ping currently in effect.

    Summary report of changes


    There’s also another consideration. For the sake of cache and ease of content management, one alternative to this approach would be not to send content, only an alert of changes.

    Again, the readers have already signaled they are interested. Why not save bandwidth by requiring the readers to click on the link—they already want the content, so the link is just a formality.

    Philosophically, this approach is not consistent with the XML approach of providing both links and content to the reader.

    However, it remains to be understood through both simulation and research on the expected content volumes whether there would be any savings or benefits to this approach.

    Publish-based ping


    Another approach to monitoring the changes on a specific page would be a notification based on publishing. Again, this is to be distinguished from notification-based-on pinging.

    In this approach, the details would not be gathered until the user clicks on the window. These methods fall into general categories:


    Summary Approaches


  • 1. A system that is [publish EQUALS notify-subscribers-of-change]

  • 2. A system that allows multiple entries in PingOMatic; moves from a single feed update, to one that allows updates to any page at anytime.

  • 3. A mirror feed that is the polished version of the feed.

  • 4. Auto-searches from the aggregator based on content.

  • 5. An aggregator that displays current feed version with the request for information, not based on a feed-ping.

  • 6. Click-on-content auto-reports.

  • 7. Sign-into-aggregator auto-pings publishers for updates.



    Commands


    What the user needs to be involved in with the settings is some greater fidelity in the auto-notifications. If the user wants they should have the option to say, I have a confirmed interest in this feed, so I want to have content delivered to me, even if the bots and PingOMatic don’t know about it, can’t find it, and the content is buried.

    The user should be able to say that they:

  • Grant permission to get updates based on non-pings, just publications
  • Do not want to wait for a ping to get an update.
  • Want their system to act as an auto-bot to more closely follow those URIs that have a confirmed subscription, regardless whether or not there is a ping.

    As an analogy, let’s remember why the subscriber is looking at blog-comment feeds. These are special feeds that run parallel to the main comment feed. Ideally, these feeds would be integrated so they form an integrated line of information around topics, blog entries, or other tags.

    Archived and comment feeds are useful to compare. Comment feeds tell us, among other things something about the users.

    Updates


    Users want to be part of the updates. In a similar vein with respect to archived content changes, the user wants to be able to extract data when the new content is published, to link-lock with the URI via the publish command, and get new stuff, changes, and additions.


    How things fit

    Users want to see how the additions relate to the original content. Thus, when we look at archived data, the user wants something that is going to integrate their current changes with the archived versions. The user would want to customize this feed.

    The users would be given the option to format their mirror feed [the finely polished summary feed with all revisions integrated], and the types of information they want [if any] on the raw feed [showing the strikeouts, changes, comparison between earlier and later feed changes].

    Mirror Feeds FAQs


  • Specifics: What is a mirror feed?

    A mirror feed is something that is a polished version of a feed. It is an aggregate of all the changes and comments into a streamlined, highly refined feed. All subsequent changes are updated, incorporated.

    The mirror feed is different from the main site feed. The main site feed is simply a number of entries that may or may not include all the changes.

    The mirror feed takes these updates, and includes them in final form. The main site feed is supported by several comment-feeds and change update feeds. The mirror feed is like an umbrella, aggregating all this information into the most recent version of the feed.

  • Steps: How does a mirror feed work?

    The mirror feed simply takes the most up to date information and presents it. The mirror feed strips out all the changes, and provides the most recent version.

    These changes are assigned to the change-trace cache where readers can reference them.

  • Demonstration: How does the mirror feed report and publish content changes on a single page?

    Working with single pages or spot-feeds can be interesting. The first problem is to assign a unique URL, then link that URL with the original feed.

    What the mirror feed does is tag the original content, assigns the changes to a separate file, then looks at the time stamps associated with each change.

    The most recent changes and time stamps take precedence.

    The mirror feed first looks at the original feed, and then incorporates each change step by step. These final changes are tagged with greater weight. During subsequent reviews, these later changes are referenced first.

    In time, there may be an interest in reversing the steps and retracing the changes in the feed. There is an undo-function in the feed as users want to strip away the versions. Users will be able to state specifically how much they want to back-step the change-log in the main feed, just as they can tell a browser to increment a URL by X-numbers with a one-click command.

    When working with a single page, the mirror feed receives a notice that there has been a change. The aggregator-bot will then drill into that URI, compare the most recently read content-version with the changes, and then show the differences.

    Once the user marks the changes as all read, this will time-stamp the user-aggregator's change-log. The next reader visit will start the review at that particular time.

    Once the publisher issues a command to open the file, this will act as a signal to the aggregator that a potential change is coming. The aggregator will then jump to the other steps to ensure that potential conflicting tasks and priorities are cleared out.

    The mirror feed will then detect that the publisher has closed out the content with changes. The signal will be the publish-command. The mirror feed will then go into the feed, look at the protocol, and seek the publisher’s batch file of the changes.

    The aggregator’s bot will compare the reader’s time stamp [when last reviewed] and compare it to the publisher’s time stamps on the changes. The changes in the publisher’s content since the last-reader-review will be grabbed, and brought into the readers aggregator.
    When the user-reader is done reviewing the updates and changes, they will signal that all has been read. This will once again stamp the aggregator with a code.

    The users can choose to back-step along each publisher change, or since each reader-review to see the changes that have occurred. The reader could either look at the changes since the last publication date, or the changes based on the reader’s review of those changes.
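
    A sketch of that back-step, assuming the change-log keeps one entry per published version, with the reader’s own review stamp recorded wherever a version was marked read; the structure and names are illustrative.

        def back_step(change_log, steps=1, by="publish"):
            """change_log: versions oldest-first, each like
            {"text": ..., "publish_ts": ..., "review_ts": ... or None}.
            Roll the view back `steps` versions, counted either by publisher
            changes or by the reader's own reviews."""
            if by == "publish":
                candidates = change_log
            else:
                candidates = [v for v in change_log if v.get("review_ts")]  # only versions the reader marked read
            index = max(len(candidates) - 1 - steps, 0)
            return candidates[index]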

  • Proof: What coding objectives are met and how is the user ultimately served by this approach?

    See the modules below, which are designed to work without either duplicating current methods; or creating inefficiencies by over engineering something of no value.


    Comparing current approach to the mirror approach


    Currently, the receiver creates a subscription list of what they are reading. Under the mirror approach, the receiver registers with the publisher:

  • I’ve read X-content-changes on this date;
  • If there are updates let me know without me or you asking;
  • I want to be notified even if servers, bots and pings do not occur; and
  • Here is my permission to submit data updates with only a publish command from you.

    As you can see there is a greater level of trust here between the content provider and the subscriber. There’s more data flowing back and forth without outside intervention or notification.

    With the mirror feed, a reader-user can choose to be clear, specific, and open with their project and search objectives.

  • If they’re using a two-way feed search tool, the information would be included as an attachment to the mirror feed.

  • The project objectives become key terms to create frameworks for the content providers to tailor content, updates, and changes

  • Publishers can be in a position to add substance to the reader-searcher’s emerging search structure.

    Mirror feeds can also be used in other ways


    When you think of a mirror feed, don’t focus exclusively on a feed or an aggregator. Think much larger.

  • Mirror feeds give the publisher a narrow number of choices to focus follow-ups

    The nice thing about a mirror feed is that the publisher can create updates for the specific mirror-request. The mirror feed is also something that the content providers can compare the search requests to see how the reader-searcher’s search-requests are getting filled in terms of searches, blogrolls, and feeds.

    The publisher would then look at that mirror feed, compare the searches, and tailor the responses and content changes to ensure that the search is getting addressed.

    Mirror feeds also give the publisher tools to choose subscribers with greater information. Publishers can target specific audiences with content, and get feedback through the feed-searches on whether the content revisions are attracting the desired reader.

    The mirror feed moves from a broad array of vague feedback [analog-ish] to more discrete feedback [digital with multiplexers hitting specific content, changes, and searchers].

  • Mirror feeds can be used by the publisher to track searches, content request, and then direct information to that segment. This is a push-focused aspect available when integrating aggregators and search engines in a two-way search.

  • Mirror feeds can be used to bypass PingOMatic.

    Using a mirror feed means there’s no need to update services that don’t take updates, or whose readers are not subscribing to that feed, content, or spot-feed.

    There’s no need to send pings to all services when only one person is reading the information. By sending the mirror feed once to that reader, not all services will get notified.

    Clearly, the problem occurs when services are not getting pings with updates, but later subscribers show up looking for content and changes.

    This is no problem. Once the subscribers show up and want the old feeds, the old change trace is available. They get with their subscription the mirror feed, the umbrella feed, change traces, and all the time increments associated with previous searches, content updates.

    New subscribers do not get access to other reader’s reviews. This is private, unless readers specifically approve that this information be both collected and released. This has to be a knowing choice in that readers must know that if they approve, anyone could potentially get their information about what they were looking at and when.

    The default option would be not to release this information unless the reader was fully aware of the potential use of this data.

    Ideally, there would be a method to dump the data at regular intervals so that archiving does not become an issue and outside parties are not allowed to access information.

    Ideally, the code could only be accessed through a wifi-system of injecting commands, using a contained conductor to create the code from within the building, and not actually injecting code across the firewalls.

  • Mirror feeds are also useful in that the content provider will know when a reader has an upcoming deadline or event.

    This mirror feed could be integrated with a reader’s schedule or plan and identify for the content provider how to tailor information to assist with the reader’s specific objective, if not directly related to the original search request or requirement.

    Again, the goal is to provide novel content in a timely and useful way.

  • Mirror feeds can be integrated with other functions, products, tools, searches, feedback, information, and platform capabilities.

    They are a feed that can be used as a tool, not simply information for the end-user to consume ad nauseam.

    Content providers may want to highlight new features differently. If they know a specific reader prefers a given format, then the mirror feed could be created and provided in a way that meets the end-reader’s demands in form, substance, and background detail, with supporting links and material.

    The mirror feed could be tailored knowing how different consumers and readers focus on specifics, their priorities, and their level of oversight. Again, the mirror feed would have all the data; the issue would be then to give the reader the option how they would access this information.

    In those cases where the content provider knows that a given level of management or oversight has certain requirements for oversight and responsibilities, and these functions are unique to that particular position, the spot-feed could be tailored to a single end-reader with a specific function.

    When the reader changes position, the data could then be reconfigured to highlight the same data, but with a different slant given the new responsibilities.

    Mirror feeds help publishers answer the following questions

  • How can we use a wiki-like structure on a feed?

  • How can we provide readers with a trace of what changed, added, deleted, or remains the same since [a] the last revision and [b] the last review?

  • What method will we use to let the aggregators know that a change has occurred on one page in the deep archives?

  • How will we let our subscriber-readers and others reviewing this content know that a single page has been updated?

    Ideally the mirror feed architecture would be tied with not only FeedBurner but any other method of burning a feed.

    This tool would ensure that a single page update worked with any of the other systems supporting feed.

    The system would be developed based on a clear understanding of [a] what method is currently used to burn a feed with a single page; and [b] how a single page is updated, reported, and published in the feed without a ping.

    Demonstration Criteria: System Testing


    These goals need to be translated into measurable quantities that can be incrementally examined during post-development testing, just prior to any formal release:

    The goal of this system would be to:

  • Send summary information;

  • Send an alert in bit-torrent way that there has been a change; and

  • Support requests for more details.

    The system would auto-send the current links to content first. Those with the highest probability of having interest would be linked with the routes the bots take and the data and logic trails the aggregator follows when choosing between equally valid approaches, options, and tasks.

    System Goals


    The system will successfully demonstrate how each page reports changes with publishing, non-publishing, pings, and non-pings.

    The system will show that

  • it can attach an alert message with a URL of the current status;
  • the receiver would know to check the last reference;
  • there was a valid comparison trace between old and new content;
  • readers could have a method to see what was changed, added, deleted, remains the same; and
  • the change-trace can be moved based on either content-provider change-stamps or reader-review change-stamps, either incrementally or in batches of arbitrary amounts.


    Discussion


    The problem with the current system is that even though there is a content change and update, the aggregators may not necessarily find the update, nor report the feed as having new content. Clearly, there is no new content; it’s just an update.

    Aggregators and readers are not on the same wavelength. Aggregators focus on links and feeds [production-support, pushing], while readers focus on content [consumption, pulling].

    The reader wants to know if there’s a change. The user doesn’t care about the links, or what is going on with the publishers’ feed. Again, the reader-user is looking for information; while the aggregator simply focuses on new content and feeds on top of that.

    As much as I hear the aggregators say that they support the users, I see the opposite. I continue to see a system that is designed more with the publication-push in mind, not with the reader-subscriber.

    XML apparently puts slightly more emphasis on content providers than on consumers, which leads us back to the original problem: why is there not a system that reliably supports content producers’ changes?

    In this case, I can think of no financial benefit to putting the needs of the source of the information behind those of the subscriber.

    Rather, what I detect is the subtle emergence of something that may not have been considered: how do we really handle these deep-archive changes for a very small number of subscribers? It may be a subtle nuance today, but it will grow exponentially as the number of publishers, readers, and archived pages grows.

    Indeed, it’s likely there’s already a system out there that does all this. The trick will be to identify the users who want this function and are willing to pay for it. As feedback for the XML marketing types: if this product exists, I’m not seeing it showcased as a product feature.

    The other thing I’ve been noticing with such a proposed system is that the aggregator-to-content-on-platform comparison would be focused on each reader. This is to say that if we were to go to a system where the aggregator pings content, then each aggregator is simply acting as a bot, defeating the purpose of a PingOMatic that aims to reduce bot traffic.

    This is another way of saying that the proposed solution to the single-page-update problem could involve solutions that make a bigger mess than had we simply stayed with the bots and never created PingOMatic.

    Thus, the solution would be to make this single-page-comparison centralized like PingOMatic, but be done in a way that complements what is currently going on, rather than re-invents the proverbial Fred Flintstone wheel.

    Functionally, what could happen is that the users would access the comparison if they select the feed-spot-update.

    If we had this system [whereby reader-users received a change to deeply buried content], what would signal the users that there’s an update to the spot-feed [even if the aggregators do not report those changes unless there is a ping]?

    The problem is that we potentially move into a duplicate system requiring extra levels of comparison. Technically it is feasible; the question remains whether it is efficient. Or whether there is an alternate way to solve these problems:

  • Deeply buried feeds;
  • Updates to the reader-subscriber; and
  • An emerging need for a workaround to the current problem with PingOMatic [with respect to single page updates and deeply archived content]

    Clearly, the short answer is to have each person independently submit each page so the bots can read them. But why do something that takes time, when the goal of these systems is to save time and automate the data transmission. Hello wooden shingles and lead pencils.

    Right now, users have to ping with each update.

  • Is it too much trouble to require them to submit each page when it is updated; or should there be an auto-discovery mode?

  • Or could there be a platform-based approach that centrally records all pages that are changed, then automatically attaches these with the ping-burst, and the PingOMatic is updated to manage these sub-changes?

    I prefer the latter: The streamlined, automated, and transparent mode. Remember, we’re only talking about 7M blogs.

    What’s going to happen in a year, or in ten years, with many more pages? This is going to grow exponentially. There needs to be a way to manage the likely changes and still keep a trace of them, all while ensuring that deeply buried information is updated and reported, and that interested readers are given visibility into what has been changed, updated, deleted, or modified.

    Right now, we don’t have that. Or if it exists, I’m not seeing it. And it’s not working. Hello Enterprise, large-E!

    Commands


    Users should be able to say:

    [do/not do] {auto-include updates in the ping-package for their sub-pages in their blog-platform}

    At the same time, there needs to be a way for the bots that are linked with the aggregators and FeedMesh to discover this content, track the changes, and report to the aggregators the information needed to create the mirror feed: the one that is finely polished.

    Users should be able to say:

  • Where they want to add the information.
  • Which folder they want the updates to go to.
  • Whether they want the information broken out into a different file, listed as a separate feed, or integrated with the original comments, or displayed in some other manner.

    Users might want subsequent changes integrated with the comment feeds, or auto-assigned to a folder that simply holds updates. Either way, there will not necessarily be a single way users want these subsequent changes reported and filed in the aggregator.

    More about this Mirror Feed

    Think of the mirror feed as something that is an exclusive list. It’s like a comment feed. It’s above and beyond what the normal site feed is.

    But this mirror feed is something special. It’s a select subscription tailored to known readers, interested readers, and people who have already expressed a desire for updates to particular web pages and are interested in a very narrow area.

    The mirror feed can be custom-designed for a special group of readers. And content that is on the mirror feed may be deliberately buried or not published for wider consumption.

    Why? Everyone else has clearly indicated they’re not interested. So why bother announcing to the world [via pings] something they don’t want?

    Clearly, one approach is to create a new feed, add it to FeedBurner, and be off with it. But this doesn’t really address the issue of re-integration. It would be nice if there was a way to have the main feed aligned with these comment-content-page-updates, but not require the publishers to create a special feed for each page, and then have a new listing in FeedBurner.

    If we break the feed off from the main site, this integration function will be lost, or require a re-integration command. One approach is to simply design the feeds to be integrated when they are created; as opposed to designing something that recombines something that didn’t necessarily have to be segregated to begin with.

    Again, we’re not creating new feed-accounts. We’re simply adding a special spot-feed because we want to have timely updates, and we also want this update to fall under the umbrella of the main site.

    This is where the umbrella feed comes in. Our main site feed would have overall control of the umbrella; while the specific-page-feeds would also fall under that.

    Imagine having a FeedBurner account where you have one set of statistics. Then for each page, and its update, you had a greater level of detail, showing how each page-spot-feed was getting attention.
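
    A rough sketch of how that umbrella-feed bookkeeping could look, assuming Python; the class and field names are purely illustrative and are not any existing FeedBurner API.

```python
# Hypothetical sketch: one umbrella feed for the main site that owns a set of
# per-page spot-feeds and rolls their statistics up into a single view.
from dataclasses import dataclass, field

@dataclass
class SpotFeed:
    page_url: str
    subscribers: int = 0
    hits: int = 0

@dataclass
class UmbrellaFeed:
    site_url: str
    spot_feeds: dict = field(default_factory=dict)  # page_url -> SpotFeed

    def add_spot_feed(self, page_url):
        self.spot_feeds.setdefault(page_url, SpotFeed(page_url))

    def site_stats(self):
        """One set of site-wide statistics, plus per-page detail."""
        return {
            "site": self.site_url,
            "total_subscribers": sum(f.subscribers for f in self.spot_feeds.values()),
            "per_page": {u: {"subscribers": f.subscribers, "hits": f.hits}
                         for u, f in self.spot_feeds.items()},
        }
```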

    Mirror feed updates

    What these new feeds will have to do is watch for the prior events that prepare the system to provide updates.

    Think about this. When we’re tracking aircraft, we don’t rely on sonar. We use radar, which sends out a signal; that signal interacts with the aircraft’s underbelly, wings, and cross-section, and the return effectively says, here I am. Sonar is different: it relies on a bounce.

    What these feeds need to do is move away from sonar, and start noticing what is going on with radar. There are things occurring on the platforms that are not related to pings, but that still provide valuable information related to the impending changes.

    By this I mean that the aggregators and confirmed subscribers could benefit by focusing on what preparatory commands are occurring that suggest a change to a webpage is about to happen.

    Again, under this model, we’re focusing on getting information to the reader. The goal would be to ensure that the updates are delivered in a timely manner.

    If we can’t focus on actual changes [because the aggregator is too busy, or we don’t want to send bots], then the signals prior to the change need to serve as the preparatory or alert signals telling the aggregator [the one that has a confirmed reader] that something of interest to the customer is probably on the way.

    So, in anticipation of something about to happen, the bots would be deployed to other tasks; that way, when the actual command shows up, the system can stop those tasks, refocus, and allocate itself to delivering this highly prized content: a single page from many years ago, updated for a single subscriber.

    This could be a server with a multiplexer. The question is whether the XML community and aggregators want to move from a simple system toward this type of task-switching in an aggregator platform. If you’re able to do trillions of calculations per second, it may not make much difference.

    But think about 10 years from now. More blogs. More XML feeds. And more archives. Where the number of changes is higher.

    Think about the backlog issues of aggregators. Would it be smart now to design a system that is prepared to handle this expected backlog? When the NYSE designed its upgrade, it wasn’t sizing for the number of shares traded back when the market first opened and traders were making scratch marks on wood shingles.

    No, the system was designed with the likely demand loads. Today, PingOMatic doesn’t appear to support these types of retroactive changes to a single, archived page. It remains to be understood whether FeedMesh will support these focused-archived changes.

    The aggregator-designers may have this in mind. But I have yet to understand whether the XML community as a whole is consciously designing aggregators to have a specific task-management function with the old-content-being-updated-and-pinged-with-bots paradigm in mind. It's clear that the current platforms do not.

    Given that the workload, changes, and volume of information are growing faster than expected, it is likely that the potential challenge for aggregators, archives, pings, and tailored feeds will fall off the radar. That is, if it ever gets on there.

    But that doesn’t make the likely problem disappear. It just makes it more annoying when the world realizes what could have been done earlier.

    Getting back to the radar-versus-sonar question: the issue for users will be how to rely on the aggregators to signal a change in content without a direct ping, using something that focuses on preparatory commands within the content provider’s platform. The idea is that we don’t rely on a ping to notify of a change; we rely on the platform’s reaction to a preparatory command as the signaling event.

    This, as opposed to the current system that:

  • waits passively for a ping that only looks at the surface;
  • does nothing to integrate the comment feeds with the specific original blog; and
  • fails to reliably report a single page update from the archives.

    Possibly one solution is to permit greater data exchange between the content provider and the subscriber. Perhaps the content provider signals the FeedMesh any time an old file is accessed, and sends a final check when it is closed.

    Perhaps the solution would be to package these changes at the content provider’s location. In turn, this information would be sent as a single stream when the user is signaling:

  • that they are about to leave; or
  • it is cached [like Excel does if the system freezes up, and it saves your latest version].

    Then the system would automatically send the content changes to the spot-feed next time the user signs in and is doing work on their blog or publishing system.

    Ideally, this would be something that is transparent, but something that the user, publisher, reader, and XML platforms make a conscious decision about. Again, whether the benefits outweigh the workload remains to be articulated.

    What can be said in a general sense is that when the reader subscribes to a given feed, they should also have that subscription interpreted by the platform to mean the reader wants, among other things:

  • updates;
  • to be told of updates;
  • a notice that the updates are available;
  • the updates to be reliably reported where they want them displayed;
  • the option to ignore the original version and stick with the final version;
  • their aggregator to display the difference between the original content and the changes;
  • their aggregator to do what it needs to do to work with content publishers so the content provider can get content to the reader [including data sharing and special access privileges]; and
  • the system to put [this-URI-in-this-aggregator] on the ping list that gets notified when there are future publications of content changes in the deep archives.
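
    A small sketch of what that interpretation of a subscription could look like as a data structure, assuming Python; every field name here is an assumption made for illustration, not part of any existing feed specification.

```python
# Hypothetical sketch: a spot-feed subscription record in which each flag
# corresponds to one of the reader expectations listed above.
from dataclasses import dataclass

@dataclass
class SpotFeedSubscription:
    reader_uri: str                    # [this-URI-in-this-aggregator]
    page_url: str                      # the single page being watched
    want_updates: bool = True          # deliver the updates themselves
    notify_on_update: bool = True      # send a notice that updates are available
    display_folder: str = "updates"    # where the reader wants them displayed
    final_version_only: bool = False   # ignore the original, keep the final version
    show_diff: bool = True             # display the difference old vs. new
    allow_publisher_push: bool = True  # data sharing / special access privileges
    on_deep_archive_ping_list: bool = True  # notified of future deep-archive changes
```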

    These commands would exist and be maintained alongside the mirror feed, and continue in perpetuity until the feed is canceled or deleted. Remember, there are people who may choose to pass down their feeds in their estate as intellectual property.

    Enterprise: If it was obvious it would not be patentable

    The enterprise in all this is to create a system that is independent; that can work with and integrate into the existing and likely tools; and that is robust enough to survive crashes while maintaining the flexibility to adapt to upcoming changes in volume, features, and user demand.

  • What changes do we anticipate occurring in the XML platforms?
  • What features, systems, and designs can we develop today that will accommodate these expected changes?

    Single page update


    It’s interesting to run a simulation of the above features for a single-page update. To make a long story short, there’s an opportunity to create one-step functions like the ones we saw at Amazon.

    Let’s go through some of the steps an end-user has to go through to actually do a manual one-page update; and then review content.

    First, remember that users are currently unable to rely on the webpage platforms to deliver updates through Newsgator. There’s no visible ping coming through. Also Bloglines does connect to the feed, but it too doesn’t actually display the updates as an increment.

    Requirements


    What’s needed is a system that does the following:

  • User enters a published item [signal to the reader-aggregator that a change is possible]
  • User makes changes [preparatory command]
  • Platform detects changes saved [publish command]
  • Batch file of changes consolidated and cached on the publisher platform
  • User closes out the site [preparatory signal to have the content reported to the subscriber]
  • Batch-file updates are forwarded to the specific subscribers.
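
    The six steps above can be sketched as a small publisher-side session, assuming Python; SpotFeedSession and the notify callable are hypothetical names standing in for whatever transport the platform actually uses.

```python
# Hypothetical sketch of the publisher-side workflow: a preparatory signal
# when an item is opened, change batching while editing, and a single flush
# to confirmed subscribers when the user closes out the site.
class SpotFeedSession:
    def __init__(self, page_url, subscribers, notify):
        self.page_url = page_url
        self.subscribers = subscribers  # confirmed spot-feed subscribers
        self.notify = notify            # callable(subscriber, payload)
        self.batch = []

    def open_item(self):
        # Step 1: signal reader-aggregators that a change is possible.
        for s in self.subscribers:
            self.notify(s, {"page": self.page_url, "event": "preparatory"})

    def record_change(self, change):
        # Steps 2-4: detect changes and cache them on the publisher platform.
        self.batch.append(change)

    def close_out(self):
        # Steps 5-6: forward the consolidated batch to the specific subscribers.
        if self.batch:
            for s in self.subscribers:
                self.notify(s, {"page": self.page_url,
                                "event": "publish",
                                "changes": list(self.batch)})
            self.batch.clear()
```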


    Today, we don’t have this. What actually happens is the following, and the revised system needs to fix these problems:

    First, when users save content they are not given a ready link to create a single feed.

    Rather, they have to go to a separate site to create a spot-feed, then re-enter this spot-feed manually. This is only possible after publishing the spot, grabbing the URL, then throwing it back into the content. This is a single platform. I should be able to know what my exact URL and URI will be before I publish, and have this automatically loaded when I signal that I want a link to create a spot-feed.

    Second, there’s no quick way to inject that newly created spot-feed into the spot. They have to manually open the feed.

    We’ve got robots that can take java code from an external platform and create a new comment-protocol that is integrated. I see no reason why the same type of approach could not be applied. Indeed, since this is my platform there’s no reason why it can’t happen: I don’t have to give myself permission to inject a special code on my platform.

    Third, there’s no quick way to document changes, link them with time, or ensure the administrative tasks are organized.

    This is problematic. These are multiple tasks that could be automated and integrated into a single step, just as we have seen at Amazon. Currently, users have to manually:

  • flag the content;
  • inject a date-time stamp by referencing an external feed;
  • assign a tag to that content;
  • create a schema to integrate this spot-feed tag with similar tags;
  • advertise this new tag; and
  • ensure that specific content-consumers are aware of the changes.

    Fourth, the tagging mechanism is not integrated with these special updates.

    Who would’ve thought that we’d have code and content updates occurring faster than the tagging protocols could keep up with? I’m not sure what’s getting developed faster: new aggregators to integrate tags, or new tags to chase aggregators. It’s all so confusing!

    Fifth, there’s a problem with the new URLs. For some reason some aggregators do not appear to handle certain URLs, while other platforms can handle the extended URLs. What’s needed is a mechanism to shorten the URI to something manageable that is linked to the original URI, falls under the umbrella of the original feed, and is something existing readers are going to recognize.

    At the same time, these shortened URIs need to be integrated with, and support, the code-matching function likely required under the two-way search engine.
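
    One way the shortened-URI idea could work, sketched in Python: derive a short, stable key from the full URI and keep a map back to the original page and its umbrella feed. The hashing choice and the map are assumptions made for illustration only.

```python
# Hypothetical sketch: shorten a long URI to a stable key, while keeping the
# link back to both the original URI and the umbrella feed it falls under.
import hashlib

URI_MAP = {}  # short key -> (original_uri, umbrella_feed_uri)

def shorten(original_uri, umbrella_feed_uri):
    key = hashlib.sha1(original_uri.encode("utf-8")).hexdigest()[:8]
    URI_MAP[key] = (original_uri, umbrella_feed_uri)
    return key

def resolve(key):
    """Return (original_uri, umbrella_feed_uri) for a shortened key, or None."""
    return URI_MAP.get(key)
```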

    I see cache. An archiving function. And a system of code-matching. This calls for a computer, Fred Flintstone!

    Summary Requirements


    Ideal: Let’s review the user requirements

    A system that:

  • Reports content changes to single pages along a feed;

  • Provides updates and reports changes to the aggregators;

  • Allows easier data flows between confirmed subscribers and content providers; and

  • Allows for content changes to be delivered regardless of how deep or old they are in the archive.


    Summary Goals


    Requirements: Let’s define the system goals and final architecture

  • A single-page update feature in PingOMatic

  • An update delivery based on published content changes


    XML Concepts: Enabling technology required to accomplish these feats


    Auto-discovery of content updates


    Aggregators should, when reader-subscribers open the platform, be on the lookout for content changes since last review.

    Aggregator bot with multiplexer

    The aggregator would interface with the publishers to ensure that the reader-subscriber-aggregator is focusing on tasks that would otherwise get overshadowed once the content changes are reported.

    Preparatory command detection on the content provider’s platform

    When a content provider opens archived content, the publisher’s system will alert existing subscribers’ aggregators that potential changes are on the way.

    Publish-notify concept

    Once the archived document is re-published, the publisher’s system will send a ping to the existing feed-page-content-subscribers.

    Workaround to PingOMatic to allow lower-level updates

    This is a method to include single-page feed-updates underneath the site-feed-umbrella.

    Tagging updates in the XML servers and search tools

    This takes the existing content as reported to search platforms like Technorati, and ensures that the appropriate tags are also indexed when the content is updated.

    Support for the deep archiving and updating of old content in searches, access, updates

    There needs to be a method that ensures the archived content is searchable; that updates are available for both searchers and existing spot-feed subscribers; and that when archived content changes, the spot-feeds are updated and reader-subscribers are notified of these changes in either summary or detailed form.

    Aggregators that can create, maintain, and support mirror feeds

    These mirror feeds are polished, integrated, and show the summary version; they act as umbrellas over the raw feeds, which show the detailed changes and the sub-page feeds.

    Coding: Modules to accomplish the tasks

    Let's go through some of the modules needed to accomplish the above system goals, and implement these XML Concepts.

    Module 1: Identify existing subscribers to current feeds

    This module would ensure that the subscribers to current feeds are known, correct, and that there is a link between what the content-readers want and what they are actually getting.

    This module would do checks on parallel feeds to make sure that the content changes getting reported match what is available on the other channels where content changes are also reported.

    The module would do sampling on subscriptions from different subscribers to ensure that the content changes given to one subscriber match those given to another subscriber of the same content feed. Where there are differences, these trends are noted and a bot would then be deployed to investigate the nature of the differences.
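
    A minimal sketch of that sampling check, assuming Python; fetch_changes is a placeholder for however a given channel is actually read.

```python
# Hypothetical sketch of Module 1's cross-channel sampling: pull the change
# records reported on two channels and flag mismatches for a follow-up bot.
import random

def sample_and_compare(fetch_changes, channel_a, channel_b, sample_size=10):
    a = set(fetch_changes(channel_a))
    b = set(fetch_changes(channel_b))
    union = sorted(a | b)
    sample = random.sample(union, min(sample_size, len(union)))
    # Items seen on one channel but not the other are returned for investigation.
    return [item for item in sample if (item in a) != (item in b)]
```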

    Module 2: Publish-notify

    This module would act as an interface between the publisher’s platform and the reader-aggregator’s notifications. The bot would get prepared when the archived content is accessed; and the bot would be activated once the content is republished.

    Once the content was republished, the bot would go along the existing channel between the content-provider and the existing subscriber to ensure that the spot-feed was updated. This would be independent of any site-feed-update.

    Module 3: Lower threshold alert on aggregator

    Once the subscriber confirms they want this URI, the publisher and reader then have lower thresholds to exchange data along an XML feed. The reader no longer has to rely on a bot to get notifications of a single feed; and the publisher no longer has to ask for permission to send updates.

    Rather, once the reader subscribes to the spot feed, the reader then gives permission for the bot to use the XML feed to make these notifications without a ping.

    Module 4: Receive commands from publishers for aggregator time management

    Once the aggregator receives notice that the archived content has been accessed, the aggregator would shift focus to other tasks.

    Once the publish-command was issued, the aggregator would then refocus on the incoming data to ensure that the content was adequately showcased, the links were valid, the content updates were correctly reported, and the reader was correctly notified of the change in content status.
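
    A sketch of that task-switching behaviour, assuming Python; the queues and task names are invented for illustration and do not describe any existing aggregator.

```python
# Hypothetical sketch of Module 4: defer background work on the preparatory
# signal, then refocus on the incoming content once the publish-command lands.
from collections import deque

class AggregatorScheduler:
    def __init__(self):
        self.background = deque()  # other tasks the aggregator can work on
        self.priority = deque()    # incoming content work

    def on_preparatory_signal(self, page_url):
        # Archived content was accessed: shift focus to other tasks for now.
        self.background.append(("housekeeping-while-waiting", page_url))

    def on_publish_command(self, page_url):
        # Refocus: showcase the update, validate links, notify the reader.
        for task in ("showcase-update", "validate-links", "notify-reader"):
            self.priority.append((task, page_url))

    def next_task(self):
        if self.priority:
            return self.priority.popleft()
        return self.background.popleft() if self.background else None
```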

    Module 5: Report content changes without ping; publish to existing subscribers

    After executing the publish-command, the publisher’s system would send a ping automatically to the confirmed subscribers.

    Module 6: Report mirror feed

    The mirror feed is the polished version of the content feed. The mirror feed includes all changes, a trace on content changes, and provides a system for readers to trace the content changes both forward and backward in time.

    The system would correctly change the content and maintain the polished feed. There would be seamless flow of information from the page update commands.
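
    A small sketch of how the mirror feed’s change-trace could be stored, assuming Python; the revision record and field names are assumptions made for illustration.

```python
# Hypothetical sketch of Module 6: keep the mirror feed as an ordered list of
# revisions so a reader can walk the trace forward or backward in time.
from dataclasses import dataclass, field

@dataclass
class Revision:
    stamp: str     # content-provider change-stamp or reader-review stamp
    content: str

@dataclass
class MirrorFeed:
    page_url: str
    revisions: list = field(default_factory=list)

    def publish(self, stamp, content):
        self.revisions.append(Revision(stamp, content))

    def trace(self, backward=False):
        """Revisions in time order, or reversed for a backward trace."""
        return list(reversed(self.revisions)) if backward else list(self.revisions)

    def current(self):
        """The polished, most recent version of the content."""
        return self.revisions[-1] if self.revisions else None
```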

    Module 7: Monitor umbrella feed

    This module would report the umbrella feed with its sub-feeds; correctly report content and link content changes to the original version; and ensure that content is displayed how users want it.

    The module would ensure that the most current mirror feed correctly placed the content changes in the correct spot-feed, and that the order of the information was correct and consistent with the time-stamps.

    This module would make sure that the subfeeds were correctly assigned to the correct umbrella feed; that the tags and links between the sub feeds and umbrella feeds were consistent; and that there was a valid method for ensuring the assigned tags to either feed also applied to the other feed.

    Module 8: Check module

    This module’s purpose is to ensure the subscription request is accurately matched with the content. This involves data checking and updates, and cross-channel sampling to ensure that multiple channels carrying the same or similar feeds are reporting the same content.

    This module would ensure that the umbrella feed as displayed actually matches the umbrella feed that is forecasted. There would be sampling within this module.

    The module would compare the actual umbrella feed with the forecasted feed. There would be sampling on the time-stamps to ensure that the actual feeds match the expected order and flow. Deviations would be reported.
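
    A minimal sketch of that comparison, assuming Python and that each feed can be flattened into an ordered list of (time-stamp, page) entries; the structure is an assumption made for illustration.

```python
# Hypothetical sketch of Module 8: sample positions in the umbrella feed as
# displayed and compare them with the forecasted feed; deviations are reported.
import random

def check_umbrella_feed(actual_entries, forecast_entries, sample_size=5):
    """Entries are (timestamp, page_url) tuples in display order."""
    deviations = []
    positions = random.sample(range(len(actual_entries)),
                              min(sample_size, len(actual_entries)))
    for i in positions:
        expected = forecast_entries[i] if i < len(forecast_entries) else None
        if actual_entries[i] != expected:
            deviations.append({"position": i,
                               "actual": actual_entries[i],
                               "expected": expected})
    return deviations  # non-empty results go to the reconciliation module
```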

    Module 9: Reconciliation module

    This module would look at what was specifically not working correctly in the outputs. This module ensures that differences are investigated and resolved. Some problems might be:

  • Coding problem;
  • Bugs in one of the platforms;
  • Time-stamps were not working; or
  • Mirror feed content-order was not consistent with the timestamps.

    This module includes auto-notification of summary reports to the corporate board, along with the likely impacts on schedules and funding requirements.

    Module 10: Simulation

    This module would do stress testing on the platform. This module looks at the change in number of pages that are archived, and forecasts the likely workloads and ping requirements.

    This module shows the number of changes, the data management requirements, and the hardware likely required to support.

    The goal of this module is to identify the trend lines where growth at various exponential rates would outstrip available capacity.

    This module can act as a catalyst to communicate new innovation needed; and act as a catalyst for other discussion on whether the baseline assumptions need to be rethought; or whether new technology would better manage the system.

    The simulation module also identifies defects in current standards, and works with academia as an incentive to develop easier methods for accomplishing tasks, new coding alternatives to XML, or refinements in the standards that yield a more reasonable solution given the simulated demand loads.
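
    A toy sketch of that kind of simulation, assuming Python; the starting figure of 7M blogs comes from the discussion above, but the growth rates, ping rate, and capacity figure are placeholders, not measurements.

```python
# Hypothetical sketch of Module 10: project archived-page counts and ping
# workload at a few assumed growth rates, and flag the year an assumed
# capacity would be outstripped.
def simulate_growth(start_pages=7_000_000, years=10, capacity=1_000_000_000,
                    growth_rates=(1.5, 2.0, 3.0), pings_per_page=0.1):
    report = []
    for rate in growth_rates:
        pages = float(start_pages)
        exceeded_in = None
        for year in range(1, years + 1):
            pages *= rate
            if exceeded_in is None and pages * pings_per_page > capacity:
                exceeded_in = year
        report.append({"growth_rate": rate,
                       "pages_at_horizon": int(pages),
                       "capacity_exceeded_in_year": exceeded_in})
    return report
```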

    Summary

    You have a roadmap to create a system that supports spot-feeds and can report changes to deeply archived data. Good luck!
    " />