09 June 2005

FeedMesh: PubSub returns are problematic when you try to get back to the original content-sources

Finding the original content can be difficult.

I've been getting info through PubSub on targeted topics.

The small problem, as I've already stated, is that I have no way of going back to the original source. There's no link in the PubSub list. Ideally, PubSub would include a link in the title.
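To be concrete about what I'm asking for, here is a minimal sketch in Python, using the feedparser library, of how an aggregator could pull the source URL straight out of each entry if PubSub carried a standard RSS <link> element per item. The subscription URL below is made up for the illustration.

```python
import feedparser  # pip install feedparser

# Hypothetical URL for a PubSub subscription feed; substitute a real one.
SUBSCRIPTION_URL = "http://example.com/pubsub-subscription.xml"

feed = feedparser.parse(SUBSCRIPTION_URL)

for entry in feed.entries:
    title = entry.get("title", "(no title)")
    link = entry.get("link")  # the standard RSS <link> element, if the feed carries one
    if link:
        print(f"{title} -> {link}")  # the source URL I want to click through to
    else:
        print(f"{title} -> no source link in this entry")
```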

No evidence services are sharing content info

However, my real concern is with FeedMesh. I'm hearing rumblings that "things are going well," that "the services are exchanging pings," and that "the FeedMesh is working."

Small problem: given that I can't find the original content-link in the PubSub returns, I've been taking the quotes I do get and throwing them into Technorati, BlogDigger, and Google.

Guess what? I don't get a return. Which tells me that either PubSub isn't talking to the other services, or the FeedMesh really isn't exchanging pings.
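For what it's worth, even this manual round-trip could be scripted. Here's a rough Python sketch that simply quotes a clip and builds a search URL for each service. Google's /search?q= form is the standard one; the Technorati and Blogdigger query patterns below are unverified placeholders, so treat them as illustrations only.

```python
from urllib.parse import quote_plus
import webbrowser

# Search URL templates. Google's /search?q= form is the standard one; the
# Technorati and Blogdigger patterns here are unverified placeholders, so
# check each service for its actual query syntax.
SEARCH_TEMPLATES = {
    "Google":     "https://www.google.com/search?q={query}",
    "Technorati": "https://technorati.com/search?q={query}",      # placeholder
    "Blogdigger": "https://www.blogdigger.com/search?q={query}",  # placeholder
}

def back_search(clip: str) -> None:
    """Open a quoted-phrase search for a PubSub clip on each service."""
    query = quote_plus('"' + clip + '"')  # quote the clip to match the exact phrase
    for name, template in SEARCH_TEMPLATES.items():
        url = template.format(query=query)
        print(name + ": " + url)
        webbrowser.open(url)  # or just collect the URLs and fetch them yourself

back_search("the exact sentence PubSub showed me")
```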

Either way, you can talk all you want about RSS, FeedMesh and some sort of "service exchanges" . . . I can't use the outputs to go back to the original source.

How long do you want me to wait to "eventually find this stuff"? RSS was supposed to expedite the communication, not create a bottleneck and require me [the researcher] to go back and wait some arbitrary time before reviewing the material.

In other words, instead of having to "check the website over and over again" [the selling point of RSS], now I have to check the "clips from the RSS feed" and then try to find each clip in the original content, over and over again.

Alternatives

Wow! Maybe this is an enterprise opportunity:

  • Develop a system that will automate the back-search from a PubSub output, find the original content, and then deliver it alongside the original RSS feed; or

  • Find the original links and then marry them up with the outputs; or

  • How about a system where the user can click on content, the aggregator takes that content-clip, creates a retroactive search, and throws it into a "pending file"; then, when the search finds the original content, it shows up. No more searching! (A rough sketch of this idea follows below.)

In practice, it would be "simpler" to just include the link in the PubSub output. That is, unless you want to use XML to do retroactive searches with auto-links and injection into the aggregator. Remember, XML was supposed to solve this, not self-evidently create more problems that "only XML can solve."
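Here is a rough sketch, in Python, of that "pending file" idea. The back-search itself is left as a stub, since it depends on which service you query and what its search interface looks like; the point is the shape of the workflow: a clip goes in, a periodic pass retries the search, and a resolved clip comes out with its source link attached.

```python
import json
import time
from pathlib import Path

PENDING_FILE = Path("pending_clips.json")  # the "pending file" from the idea above


def load_pending() -> list:
    return json.loads(PENDING_FILE.read_text()) if PENDING_FILE.exists() else []


def save_pending(clips: list) -> None:
    PENDING_FILE.write_text(json.dumps(clips, indent=2))


def find_source(clip: str):
    """Stub: run the quoted-phrase back-search against Technorati, Blogdigger,
    or Google and return the source URL once one of them has indexed it."""
    return None  # replace with a real search against your service of choice


def add_clip(clip: str) -> None:
    """Called when the user clicks a content-clip in the aggregator."""
    clips = load_pending()
    clips.append({"clip": clip, "added": time.time(), "source": None})
    save_pending(clips)


def retry_pending() -> None:
    """Periodic pass: retry the back-search; resolved clips keep their link."""
    clips = load_pending()
    for item in clips:
        if item["source"] is None:
            item["source"] = find_source(item["clip"])
    save_pending(clips)
    for item in clips:
        if item["source"]:
            print('found: "' + item["clip"][:40] + '..." -> ' + item["source"])
```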
