18 December 2004

When I say "let my feeds line up against my milestones" ... I'm not talking about the BEM

Scott et al,

With respect to the BEM (Berkeley Event Model) material....

My view of "matching personal tasks to feeds" isn't to RSSify my events, but to do the opposite: take my calendar, find the feeds that relate to that calendar, and then display the feeds in a priority order that highlights my objectives.

In a nutshell, BEM and I are talking about opposite things. BEM exists to publicize events; I'm talking about using my personal calendar to prioritize which feeds appear first on the platform of my choice [aggregator, webpage, blog, etc.]. Both rely on things that do not yet exist.

BEM simply aggregates calendars. The next step is to allow the individual user to use this "aggregated calendar information" to target their personal feeds in support of whatever milestones and tasks they have.
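
To make the idea concrete, here is a minimal sketch in Python of the direction I mean, assuming the third-party feedparser library and a made-up, hard-coded calendar standing in for real program-management data. It only illustrates the shape of the feature: score feed items against upcoming tasks and float the relevant ones to the top.

```python
# Sketch: rank feed items by how well they match upcoming calendar tasks.
# Assumes the third-party "feedparser" library; the calendar below is a
# made-up stand-in for real program-management data.
import datetime
import feedparser

calendar = [
    (datetime.date(2005, 1, 10), "spring garden planning"),
    (datetime.date(2004, 12, 23), "holiday cooking turkey"),
]

def upcoming_keywords(today, horizon_days=30):
    """Collect lowercase keywords from tasks due within the horizon."""
    words = set()
    for due, task in calendar:
        if 0 <= (due - today).days <= horizon_days:
            words.update(task.lower().split())
    return words

def score(entry, keywords):
    """Count keyword hits in an entry's title and summary."""
    text = (entry.get("title", "") + " " + entry.get("summary", "")).lower()
    return sum(1 for w in keywords if w in text)

def prioritized(feed_urls, today=None):
    today = today or datetime.date.today()
    keywords = upcoming_keywords(today)
    items = []
    for url in feed_urls:
        for entry in feedparser.parse(url).entries:
            items.append((score(entry, keywords),
                          entry.get("title", ""),
                          entry.get("link", "")))
    # Highest-scoring items (most relevant to my milestones) float to the top.
    return sorted(items, reverse=True)

for item_score, title, link in prioritized(["http://example.com/feed.xml"])[:10]:
    print(item_score, title, link)
```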

The aggregators, working with the search engines, should be doing the work, not the end-user. As it stands, the RSS-XML technology requires the end-user to manually load feed-links to read them, and then wade through the morass to find what is most important relative to a given task.

I'm not interested in content or feeds; I'm interested in finding and applying reliable information to support my objectives, results, and outcomes. Success is not measured by how many feeds I have listed in my aggregator; it's measured by what I produce and deliver in a manner that generates positive operating cashflows and exceeds the opportunity cost of the alternatives.

I'm not clear what people have been doing, as in, "RSS and XML have been around a while and the end-user's expectations are not getting met." Compared to what I've seen, my expectations are quite "high" on what the feed business is all about. Here are my other thoughts on what a "current" RSS community would look like.

The RSS-XML community is not a never-ending gravy train for bridge funding. Users need to see credible products that solve problems, not more nonsense to wade through.


They say they're listening, but I don't believe it

LazyWeb Request

These are the RSS-related things that I would like someone to solve: Show me where the solutions are, who's working on them, and when they're going to make their products more visible to the end-user.

The list

This is a list of ideas. To the right, you'll see some "popular posts" which outline blog entries related to innovation, end-user commentary, and suggestions for improvement.

I might direct you to "what this blog is about" for a focused look at the specific recommendations for the RSS platform. In there you'll find various comments on the RSS system, the architecture, and the components of the XML-RSS pipeline.

Highlights of what I'd like to see in the RSS community:

1. A clean story on why there is so much emphasis on a "valid" feed, despite aggregators being able to work with "invalid" feeds. Yahoo can take an "invalid" feed. From the customer's perspective, anything that the aggregator can read and deliver to eyeballs [regardless of syntax rules] is valid.
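
As an illustration of the point, the Python feedparser library, a common building block for aggregators, marks a malformed feed with a "bozo" flag but still hands back whatever entries it could salvage. A minimal sketch, with a placeholder URL:

```python
# Sketch: a feed can fail strict validation yet still be readable. feedparser
# sets d.bozo when the XML is not well formed, but it still returns whatever
# entries it managed to recover. The URL is a placeholder.
import feedparser

d = feedparser.parse("http://example.com/possibly-invalid-feed.xml")

if d.bozo:
    print("A validator would complain:", d.bozo_exception)

# From the reader's point of view, the content still arrives:
for entry in d.entries:
    print(entry.get("title", "(no title)"), "->", entry.get("link", ""))
```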

2. Cross-aggregator searching. Currently, each aggregator is an isolated pod. I'd like to see something that crosses all aggregators with a simple search. One search, all aggregators. I'd like the aggregators to be able to issue their "massive list of feeds" as a single chunk to the search engine, so that the aggregators get consideration in return.

I want a Java-like image of the feeds, as we can see in NetNewsIsFree: a single picture of the feeds.
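
One plausible way for an aggregator to hand over its "massive list of feeds" as a single chunk is OPML, the outline format aggregators already use for subscription import/export; the cross-aggregator search on top of it is the part that does not exist. A minimal sketch in Python, with a made-up feed list:

```python
# Sketch: package a subscription list as one OPML chunk that a search engine
# (or another aggregator) could ingest in a single request. The feed list is
# made up for illustration.
import xml.etree.ElementTree as ET

feeds = [
    ("Example Blog", "http://example.com/index.xml"),
    ("Another Site", "http://example.org/rss"),
]

def to_opml(feed_list, title="My subscriptions"):
    opml = ET.Element("opml", version="1.1")
    head = ET.SubElement(opml, "head")
    ET.SubElement(head, "title").text = title
    body = ET.SubElement(opml, "body")
    for name, url in feed_list:
        ET.SubElement(body, "outline", type="rss", text=name, title=name, xmlUrl=url)
    return ET.tostring(opml, encoding="unicode")

print(to_opml(feeds))
```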

3. XML pipeline status. I want to know, with a single image, what is going on with the various components of the RSS pipeline. I want to be able to see which aggregators, validators, syndication centers, and bloghosts are up, down, having problems, or reporting errors.

I want this in real time. I'd like to have this status fed in from each component, and all feed-related personnel should be able to see the single pipeline status: from my blog, through the validators and aggregators, along the internet pipeline, into the pingers, and then finally appearing to the reader's eyeballs.
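
As a crude starting point, here is a minimal Python sketch that polls a handful of pipeline components and prints a one-line status board. The component URLs are placeholders, and reachability is only the roughest proxy for "up, down, or having problems"; a real dashboard would need each component to publish a proper status feed.

```python
# Sketch: poll each pipeline component and print a one-line status board.
# The URLs are placeholders; reachability is only the crudest proxy for
# "up, down, or having problems".
import urllib.error
import urllib.request

components = {
    "blog host":  "http://example.com/",
    "validator":  "http://example.org/validator",
    "aggregator": "http://example.net/reader",
}

def check(url, timeout=5):
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return "UP (%d)" % resp.status
    except urllib.error.HTTPError as exc:
        return "ERROR %d" % exc.code
    except (urllib.error.URLError, OSError) as exc:
        return "DOWN (%s)" % exc

for name, url in components.items():
    print("%-12s %s" % (name, check(url)))
```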

4. One-feed submission. Right now, I can go to a list, find 100+ sites to submit a URL to, and I'm off to the races. This is a waste of time, as it takes 16 hours to carefully document all the feed-return messages. I'd like to be able to submit my feed URL once and have a system that automatically syndicates that URL to all the aggregators, with the aggregators automatically and seamlessly incorporating this information.
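
A minimal sketch of what a one-shot submission could look like today, using the weblogUpdates.ping XML-RPC call that most ping services of this era accept. The endpoint list is illustrative only; each service publishes its own ping URL, and the shape of the reply varies.

```python
# Sketch: submit one blog/feed to many directories with the weblogUpdates.ping
# XML-RPC call. The endpoint list is illustrative; each service publishes its
# own ping URL, and the shape of the reply varies by service.
import xmlrpc.client

PING_ENDPOINTS = [
    "http://rpc.weblogs.com/RPC2",   # example endpoint
    "http://ping.blo.gs/",           # example endpoint
]

def ping_all(site_name, site_url):
    results = {}
    for endpoint in PING_ENDPOINTS:
        try:
            server = xmlrpc.client.ServerProxy(endpoint)
            # Typically returns a struct along the lines of
            # {'flerror': False, 'message': 'Thanks for the ping.'}
            results[endpoint] = server.weblogUpdates.ping(site_name, site_url)
        except Exception as exc:     # network errors, XML-RPC faults, etc.
            results[endpoint] = "failed: %s" % exc
    return results

for endpoint, reply in ping_all("My Blog", "http://example.com/").items():
    print(endpoint, "->", reply)
```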

5. A method to better integrate feeds. I'm not talking about "mixing feeds" into an aggregator. Rather, I'm talking about having a platform that permits the user to actively integrate and mix blog-entries. I can then take a blog-entry from Feed 1, put it right next to another news item from feed 2 [a different search], and create a new pattern.

I want the integrator [not the aggregator] function to find new patterns within my combined feeds. I do not want to prompt the computer or the feeds to integrate; rather, I want to look at the outputs and make decisions. Let me organize based on integrated feeds, and don't make me click on each feed separately like a list of bookmarks.
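
A minimal sketch of the integrator idea, assuming feedparser and placeholder feed URLs: pull entries from two separate feeds (two different searches, say) and lay them side by side so patterns can be spotted across them.

```python
# Sketch: the "integrator" reduced to its simplest form. Pull entries from two
# separate feeds and interleave them into one working surface so items can be
# laid side by side regardless of origin. Assumes feedparser; URLs are
# placeholders.
from itertools import zip_longest

import feedparser

def entries(url):
    return [(e.get("title", ""), e.get("link", ""))
            for e in feedparser.parse(url).entries]

feed_1 = entries("http://example.com/topic-one.xml")
feed_2 = entries("http://example.org/topic-two.xml")

# Pair item N of feed 1 with item N of feed 2 so the reader can scan them
# together and look for patterns neither feed shows on its own.
for left, right in zip_longest(feed_1, feed_2, fillvalue=("", "")):
    print("%-38.38s | %-38.38s" % (left[0], right[0]))
```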

6. Feeds linked with personal milestones. This is not the same as RSSification of calendars. Rather, when I say "feed management," I'm talking about allowing my aggregator to automatically adjust feed-searches and delivery priorities based on the personal tasks I've loaded into program-management software. This is analogous to changing "what appears first" on my "feed display page" based not on the provider's input, but on my needs as a function of personal objectives, tasks, milestones, activities, and upcoming events.

In January, I want spring stuff at the top of the list, not winter stuff; when I'm getting ready for holiday cooking, I want "turkeys" at the top of my list, not "current news." I want the aggregators to take my schedule and do this automatically; I want the aggregator to read my schedule and create the priorities for me, which I can adjust if I want.

7. Feeds linked with tasks. I'd like to have a task listed from 0 to 100% complete. At the bottom of my tasks, I'd like check-boxes so that I can choose which sub-tasks I'd like to do to go from 0 to 100% completion. Based on what I choose from that template, I'd like to have feeds stacked up based on criteria I choose: relevancy, content, relationship to milestone, etc. I'd like these templates to be uniform, like MSFT's "make newsletter" templates. You make it, I adjust it.

8. Feed publishing tools. I'd like to see a blogging system that works like PowerPoint, in that it provides miniature blog entries that I can flow along my feed. The system would permit me, as the producer, to change the order of blog entries, get a big-picture perspective of my blog entries, and time my blogs along my feed, all before publishing. This will give an integrated theme to my blog entries.

I want a blackboard view of my planned blog entries. I want to be able to have multiple blogs in the development pipeline. And I want the option to auto-publish blogs so that my feed readers have time between blog entries to digest my information.

9. Feed translation. I want auto-translation between foreign-language feeds and my native language, and I want my searches to automatically get translated into foreign languages.

10. Link incorporation. A system in the feed-publishing system that permits me to back-link to blog entries I have not published. Right now, in order to get a valid URL to link in my feed, I have to have an already-published blog entry to include that link. This doesn't help me if I have multiple entries in a package of information I want to provide as a unit: I haven't yet published the last of the blog entries in that package, yet the first entry requires a link to that last-to-be-published entry.

I want to be able to publish new blog entries that list URLs for entries that have not been published, and know with confidence those "guessed" URLs are going to be valid not only in my blog, but in the feed. If that is not possible, then create a system that seamlessly lets me, as the user, override previous feed entries in the aggregators and replace the outdated entries with invalid links with the most current version.
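
A minimal sketch of why predictable permalinks would solve this: if the blog host generated URLs deterministically from the title (roughly what most tools do), the address of a not-yet-published entry could be computed in advance and linked safely. The URL pattern below is hypothetical; every host has its own rules, which is exactly the problem.

```python
# Sketch: compute the permalink of a not-yet-published entry from its title.
# The URL pattern is hypothetical; each blog host has its own rules.
import re

def slug(title):
    """Lowercase, drop punctuation, join words with hyphens."""
    return "-".join(re.findall(r"[a-z0-9]+", title.lower()))

def predicted_permalink(base, year, month, title):
    return "%s/%04d/%02d/%s.html" % (base.rstrip("/"), year, month, slug(title))

# The first post in a package can link to the last one before it exists:
print(predicted_permalink("http://example.com", 2004, 12, "Part 3: Closing the loop"))
```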

11. Backwards-searching in auto-searches. Right now, in PubSub, if I create a new search, it simply "starts the search now" and doesn't go back in time; I can type in a popular term like "RSS" and get no hits. I'd like the option to define "time windows" so that I can focus my search prior to the search-start date, or wherever I choose.
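
A minimal sketch of the time-window idea, assuming feedparser and a placeholder feed URL: filter entries by an explicit date range instead of accepting "from now on" as the only option. Entries without a parseable date are skipped.

```python
# Sketch: apply an explicit time window to feed entries. Assumes feedparser
# and a placeholder URL; entries without a parseable date are skipped.
import datetime
import feedparser

def in_window(url, start, end):
    hits = []
    for e in feedparser.parse(url).entries:
        parsed = e.get("published_parsed") or e.get("updated_parsed")
        if not parsed:
            continue
        when = datetime.date(parsed.tm_year, parsed.tm_mon, parsed.tm_mday)
        if start <= when <= end:
            hits.append((when, e.get("title", "")))
    return sorted(hits)

# Look backwards: everything from 1 Nov 2004 up to the day the search was created.
for when, title in in_window("http://example.com/feed.xml",
                             datetime.date(2004, 11, 1),
                             datetime.date(2004, 12, 18)):
    print(when, title)
```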

12. Feed stacking. This is how Google does its work. It takes all the information, and the common, similar news items are not duplicated; rather, they are stacked. I'd like to see a preset-layout page like Google's in my aggregators. I do not like the thought of "reviewing all my links in my aggregator" in order to see the feeds. That makes it no better than a list of bookmarks.
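
A minimal sketch of feed stacking, using nothing fancier than word overlap between titles; the headlines are made up, and real clustering would be smarter. It only shows the shape of the idea.

```python
# Sketch: "stack" near-duplicate items the way Google News groups a story,
# using word overlap between titles. The headlines are made up.

def tokens(title):
    return {w.lower().strip(".,!?\"'") for w in title.split()}

def similar(a, b, threshold=0.5):
    ta, tb = tokens(a), tokens(b)
    return bool(ta and tb) and len(ta & tb) / len(ta | tb) >= threshold

def stack(titles):
    stacks = []                          # each stack holds similar titles
    for title in titles:
        for group in stacks:
            if similar(title, group[0]):
                group.append(title)
                break
        else:
            stacks.append([title])
    return stacks

headlines = [
    "RSS aggregator releases new version",
    "New version of RSS aggregator released",
    "Holiday cooking: how to roast a turkey",
]
for group in stack(headlines):
    print(group[0], "(+%d similar)" % (len(group) - 1))
```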

13. RSS feed screening criteria. I'd like to see some scholarly research on RSS feeds that are examples of "high value" vs. "low value." This would permit outside users to have some credible metrics to baseline their content against, to know how they're stacking up. It would also give RSS consumers some benchmarks to quickly evaluate RSS feeds for content, reliability, usefulness, relevancy, and validity. If the "real answer" is to use the HTML-related metrics, then let's admit it: HTML is not dead, but still relevant.

14. Developer feedback. I'd like the various aggregators and other components in the RSS-XML pipeline to provide some specific examples of blog entries that were useful, along with a demonstration of a specific blog entry translated into an actionable item and an end result we can see today. As it stands, I hear "we take inputs," but I'd like to see something specific so that I can tailor my inputs to be more useful. Right now I'm spitting into the wind and do not know which way to adjust my writing or focus.

15. Developer actions. I'd like a big picture of where this RSS-XML stuff is going. Apparently there are many development efforts working on inputs. Brilliant. At this juncture, I have no concept of how my inputs are or are not fitting in with "where things are going." I'm not asking for proprietary information; rather, a general sense of where things are going so that I don't duplicate other comments and can at the same time provide something that is challenging and useful.

I want to see where XML-RSS feed development is going with a single image and know how my suggestion fits in with the overall momentum.

Summary

You didn't find this; I found you. That's the problem with the blog-net-RSS-XML platform as it stands. You can't listen unless you have access.

There's no reason you should have had to wait this long; I started documenting my end-user reactions and thoughts back in November.

Ping-return messages: They are misleading about your feed listing status

Here's a caution for RSS feed-hosts. If you're getting a "thanks for the ping" message, you may not necessarily be listed.

Don't ask me how, but it happens.

Apparently, even though you may have submitted your feed and gotten a confirming message back that you're listed, don't take the ping-backs as confirmation that things are OK.

You have to actually go in there, find your URL, and make sure it's listed.
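
That manual check can at least be scripted. A minimal Python sketch: fetch each directory's listing or search page and look for your feed URL in the returned HTML. The directory URLs are placeholders, and a real check would use each site's own search form or API where one exists.

```python
# Sketch: instead of trusting "thanks for the ping", fetch each directory's
# listing or search page and look for the feed URL in the returned HTML.
# The page URLs are placeholders.
import urllib.request

MY_FEED = "http://example.com/index.xml"

DIRECTORY_PAGES = [
    "http://example.org/recently-updated",
    "http://example.net/feed-directory",
]

def actually_listed(page_url, feed_url, timeout=10):
    try:
        with urllib.request.urlopen(page_url, timeout=timeout) as resp:
            html = resp.read().decode("utf-8", errors="replace")
        return feed_url in html
    except OSError:
        return False     # unreachable counts as "not verified"

for page in DIRECTORY_PAGES:
    print(page, "->", "listed" if actually_listed(page, MY_FEED) else "NOT found")
```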

Quite a surprise when "all this time" you're getting "thanks for the ping" and "thanks for listing" and "your submission is confirmed" and "here is your confirmation".

Every indication you've gotten is that your site-feed-URL was submitted, and things are connected. Unfortunately, reality doesn't match the available information.

There's a disconnect between [a] whether your feed is submitted; [b] your actual feed status; and [c] whether the ping-message you're getting back is a good indication of how your feed is getting received.

So, you've just listed your feed with x-hundred feed-URLs. How many of them actually worked despite the confirming message back?

Oh, "you don't know which ones you submitted them to..." Ouch. Looks like you have no basis to verify whether the system that said it got your feed actually still has your feed.

Looks like you have two options: Resubmit to them all again. Or ignore it. Either way, you don't know.

And that's merely another problem with RSS. You think you know, but when you dig a little, you find out there's so much that doesn't add up.

Don't argue with me that RSS is efficient when someone has to go to this level of detail to reverify things that "should already work" as advertised. This isn't efficiency. This is a level of fidelity that auditors aren't going to go through.

Let's hope XML and RSS don't turn out to be another White Elephant like Enron. You've been warned.

posted by Mud's Tests at 19:10

Aggregator ranking data: There are limits to their utility and usefulness

Bloggers: Who are you actually competing against?

Let's consider how many firms are using the "ranking" as a basis to negotiate advertising revenues. Here is the roughly 97th most popular site on a random aggregator. That's right: just http:// ... without a complete URL.

No content. No valid links. Not taking you anywhere.

And...this blank-http://-link just moved a certain instant-poondit out of the top 100.

You still want to pour more money into aggregators giving you "this" kind of "information"?

You need better, independent data that measures more than rankings and link-backs; something that actually tells you whether there is content, eyeballs, and something that is sustainable.

I don't see it. I see empty URLs and high rankings. That makes no sense.

posted by Mud's Tests at 18:58

RSS Venture capitalists: Buyer beware!

Does the word "Enron" make your skin crawl? I suggest you get something for your stomach before continuing.

RSS and XML have been touted as the "new" new-thing for the public to embrace. Caution is warranted. RSS is good for late-breaking news from sources that can't get an official platform. Then again, so is Google. So much for the RSS benefits.

And a small problem for the RSS groupies--many of their claims don't stand up.

Claim 1: The venture capital bridge funding is going into new products.

There's not much sense in creating "consolidating mechanisms" like aggregators that perform no better than Google. At least Google is self-contained and self-supporting, both in content and in software management. Aggregators require constant pruning.

Further, there are too many holes in the existing RSS-XML platforms to credibly believe that the money is going to new-future products. It appears the money is going to back-fill the holes that were not completed the first time around.

We have yet to see a good set of metrics for either the boards or independent auditors to ensure the RSS-XML backbone meets user requirements. They've had plenty of conferences on standards, yet the developers still hate XML.

Auditors need to check where that funding is actually going; demand a good story on why those requirements have not already been completed; and ask how the "new requirements" are going to get completed when the funding is exhausted on the unfinished work.

Claim 2: There's a huge market

Let's also consider the time spent verifying claims. On one hand we have a simple validator that reports an "error," yet this error is irrelevant--the feed can still be read.

Also, the same site reports that fewer than 2,000 people have used it. Wow, so we have 2,000 people on a planet of 6.2B that have a valid feed. I'm not impressed. It's been how long since RSS was introduced? Oh, that's right--the RSS user's group got cancelled for lack of participation.

Auditors need to get a good understanding of the returns on advertising. Make some credible arguments that the "model that failed under the web/dot.com bubble is going to magically be solved under RSS."

I see nothing but handwaving and Genies. Where's Major Healey?

Claim 3: This has been audited by outside investment bankers.

How much money do they have in the firms they're covering? They don't audit, they professionally market things.

A real audit would look into the number of recurring error reports at a given syndication site and ask, "What's the plan to close out these reports?" It would also inquire into why we have confirmation messages coming back that the site is listed, nothing in the ping reports saying there is an error, and yet the site is not actually listed for subscription.

We don't have these kinds of answers. Who knows what kinds of questions the boards of directors and audit committees "should" be asking under SarBox, but their eyes are glazed over as they were in the dot-scam mania of the 1990s.

Claim 4: The aggregators do all the work for you.

The aggregators are just glorified bookmarks. They don't actually do any work. At best, the aggregators simply create islands of content that outside search engines cannot penetrate. So much for the "seamless" web, and "hello" retro-grade internet development.

At worst, the aggregators perform worse than a simple bookmark list. This is a waste of time to create this list, transfer the data, and then spend x-months believing you're working with something better that is actually no better.

Congratulations on creating a mechanism to delay awareness of reality.

Claim 5: It's really easy to review many sites.

Wrong. It's more difficult. The current aggregators create the false expectation that things are easy. Actually, there are holes, and a personal bookmark list works just fine.

It's absurd to suggest the "old web" could not be reviewed easily. Hello, can you say "links"? At least the "old web" could be searched; the current aggregators, by contrast, do not expedite reading sites, as one simply has to reload the feeds one by one.

There's no advantage to having an aggregator over privately listing the site URLs. At least private bookmarks allow credible, reliable searching.

The existing aggregators have proven themselves wanting in, inter alia, the following ways:
  • Listed sites with valid feeds do not appear in the aggregator searches;
  • Valid keyword searches for known content on the web produce no hits;
  • Content that has been reported as pinged does not get archived; and
  • There is no method to backfill the content holes.

Claim 6: RSS saves time

Wrong. The feeds, when compiled, simply create larger piles of sites to "not be able to get to." It takes time to re-load and re-check these URLs. That's time that could be spent actually reading content.

At best, RSS simply creates a new set of rules to confuse people and avoid credible outside oversight. Another delaying tactic. Indeed, despite the confusion and lack of pointed questions, the developers still can't get their act together.

The community has a disjointed XML backbone, the developers hate it, there are known holes in the system, there is no credible leadership to address these issues, and all the while they talk about how great it is. Can you say, "Which pod am I hiding the pea under?"

It's a shell game. The more shells, the greater the difficulty for oversight, workpackage traces, and credible progress reports.

I remain skeptical of reports that individual web surfers can surf hundreds of sites faster. We need some specific names, something more than a website touting this conclusory fact, and some real data to justify the conclusion.

Got time to kill? Then you don't need feeds. To get listed, you need a full weekend to load 100 feed URLs, document the report-back messages, and then verify the messages actually generated an accurate status message. Hello, "logic disconnect" that the Street loves to exploit.

Other questions: For those who claim they can read 100s of RSS feeds "more quickly," I ask you: Did you ever actually read the sites to begin with? How much time did you spend loading the feed URLs into your aggregator-bookmark file? And how much repetitive content do you have between the common aggregators? [Are your eyes glazing over -- exactly what they want.]

Lack of integration: The software and aggregators currently available to the public do not integrate feeds into a single easy-to-read platform like Google News, where you can also customize the searches.

Invalid status messages: Much time is spent listing, without results. I see no time savings when the time spent "self-reporting a site" amounts to a confirmation message of "you're listed," but a later check cannot find the "listed" site.

Duplication: There exists no method to screen out the duplication across platforms. It's no "time savings" to have similar content multiply-listed. Google permits content stacking--similar stories are grouped. Aggregators do not do this.

The ugly secret: How much time do corporations spend re-sending data that cannot be taken via XML? We have yet to see credible independent tests demonstrating that time is saved. How much of your contract effort is going to pay for a "back-up system" that is actually the primary? Oh, we can't talk about that.

Returns on advertising: We need to see something related to the real content they're looking at. I'm not convinced the "eyeballs on the feed" metrics amount to much [for purposes of estimating the potential advertising pools/revenues]. Again, we need to look at returns on advertising, not simply eyeballs and dollars spent on ads.

Claim 7: XML brings the sites to you.

No change. Bookmark links also "brought the site to you."

XML is no different. Or is this as opposed to the "old" web, where you had to go look for things? You still have to look for things on the aggregator, which is no different from a professional link-archiving system.

Further, other "bring the web page to you" systems mysteriously disappeared, yet XML-RSS self-reports that "it is different." How? No answer.

Claim 8: You can syndicate easily.

Strike "easily." That is, if you forget the errors, the time spent, and the failures to correctly report listing status. Plus, even if you're syndicated, so what? That doesn't mean anyone is going to read it.

How much of your money is going to fund a developer's late-night surfing over actual product development? If they're blogging "about other stuff," they're not developing.

Claim 9: It beats the alternatives.

What? Using Google, bookmarks, or other search engines? At least when I use a search engine I know that I've got something.

With an aggregator, I can't get listed sites to search; nor can I be confident that the services reporting my site as "pinged" have actually saved the content in their archives.

Claim 10: It's a great platform.

For whom? The developers looking for another "pet project" to shop to the Street? They've got endless error reports coming in, the same errors, and no closure.

At worst, they do not support well-supported platforms. Why is money being spent on efforts that will outline, catalog, and organize error reports for systems the developers have no plans to ever integrate with or support? No answer from the developers.

Claim 11: It's a whole new adventure.

That's right. Venture capital is an adventure in trying to get your money back. Hopefully you haven't mortgaged your home.

XML venture capital: how long will it take for the truth to surface? The reason they're promoting XML and RSS as "great" is that the smart money has already left.

Don't be dumb. Thoroughly investigate anything related to RSS and XML.
posted by Mud's Tests at 16:15