19 January 2005

PubSub's credibility problem: The recurring message showing they have problems

Ref. The link shows the message we continue to get using PubSub's instructions.

At this juncture, it is clear that PubSub has a problem that their management and coders are either not aware of or cannot resolve.

Surprisingly, management appears to assert, without data or any supporting reference, that "there is no problem with PubSub" and "there are no delays."

Why, now 96 hours after initiating the search strings, do we have no responses in either PubSub or the aggregators using PubSub feeds?

I'm sure there's a perfectly logical explanation. Yet, despite raising this point quite some time ago, we continue to get the same answer: "There is no problem."

OK. Let's play a game. It's called, "Prove it."

Some have asked why the concern. Well, let's take a look at the fine Morgan Stanley analyst report. Check page eleven [11]; Ref

Take a look at the two exhibits on page 11, Exhibits 10 and 11:

  • See "Exhibit 10, The News Curve: How a News Story Evolves Over Time"
  • "Exhibit 11: The Content Curve: How are Viewers
    Distributed Over Sources"
  • PubSub's problem is that its code is generating search output that is internally inconsistent. This is another way of saying: The left and right hand within PubSub are not coordinated. One part of the platform does one thing, while the other part does something else.

    Again: One part of the PubSub platform works fine; the other part produces output-messages that are not consistent with the other half that works.

    From this vantage point, it doesn't matter "what is causing the problem" -- that's a management-developer issue.

    But what I do know is that I cannot reliably go to PubSub and get a subscription result. Perhaps I am the only one on the planet who has this many error messages.

    The problem with PubSub's delays and recurring messages is this: the internet is not only narrow; with PubSub, you're also late ... the worst of both worlds. That puts you well out on the right side of the skewed curves.

    Fine. Then the world needs to ask, "Is there something within PubSub that is not stable; or are there instructions and commands that are prompting users to do something that we cannot replicate?"

    Again, it is not my place to tell the developers "what is wrong," as they are the experts with their system. I do not sit here with the expectation that CEOs of Fortune 500 companies show up on my doorstep to ask me how to use their products. They make products for me to use; I need not guide them on how to use their product, or walk them through their own instructions and procedures.

    However, with respect to PubSub and the Morgan Stanley report: If PubSub cannot create a platform that reliably generates output, then we have a lag time.

  • If I get a message saying, "There is nothing" ... while another part of the system is saying, "Here is content" for the same search, that is not a user problem. That is a PubSub-generated problem: same search, different result; same request, one has a lag, the other produces a result.

    With respect to the Morgan Stanley report: If I have a delay in getting feedback, how do I know what the error is? How do I know "what the fix is"? All I know is that as the event passes, the usefulness of my content falls dramatically. And if PubSub isn't delivering me content, how many others have the same problem: They rely on something that isn't working the way they think it is.

    Objective to be fast

    If the end-user has to wait 96 hours and "still get nothing," there is no prospect that I can believe that "if I write about something" it will appear in a timely way for someone else to read while the event is fresh. It makes no sense to discuss in a blog something that is a week old, as events have continued to make my comments irrelevant.

    Incentive to delay

    More bluntly, if an event breaks, and subscribers then start their subscriptions, but they "get the same kind of mode I am getting" [message: We have nothing], then it means that if I am fast in posting a response, those users who then start their subscriptions are going to miss my content because I am too fast. I, as a blogger, would be "smart" to delay.

    Recall, the goal of IT and XML is to create products that allow people to continue "doing what they do" and "work with how they do things." In my view, we have a problem when an apparent delay and flaw in a platform is driving people to delay providing content because "the other people requesting submissions" may end up with a subscription that only becomes active and ready to work after my content is old news.
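    To make that timing argument concrete, here is a minimal sketch, in Python with invented numbers, of how a prospective-matching service, one that only matches content published after a subscription is created, would cause late subscribers to miss early posts. The class names, figures, and the matching rule itself are assumptions for illustration, not a description of PubSub's actual internals.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    published_at: float   # hours after the event breaks

@dataclass
class Subscription:
    reader: str
    created_at: float     # hours after the event breaks

def visible_posts(posts, sub, matching_lag=0.0):
    """Assume a prospective matcher: it only delivers posts published after
    the subscription exists (plus any processing lag). Earlier posts are lost."""
    return [p for p in posts if p.published_at >= sub.created_at + matching_lag]

# Hypothetical timeline: I post one hour after the event; a reader subscribes later.
posts = [Post("fast blogger", 1.0), Post("slow blogger", 30.0)]
late_reader = Subscription("reader who subscribed after the news broke", 6.0)

print([p.author for p in visible_posts(posts, late_reader)])
# -> ['slow blogger']  The fast post never reaches the late subscriber,
#    which is exactly the perverse incentive to delay publishing.
```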

    Tsunami: Emergency situations

    It doesn't say much when end-users, following the given directions, have to wait for something; and it's also something the other bloggers need to know: There may be times when you blog "too fast" because the platforms are not ready to report your data until well after you publish. In that universe, it's a waste of time to expect your publication and content to get widely disseminated: People could very well "only get a valid subscription ready to find content" well after the initial reports. Think about what happened in the wake of the Tsunami.

    The initial reports were critical. Who had the needed search strings already loaded prior to the event? I can think of nobody. So, if they have to start a subscription after the event, they will miss the initial content because the platform is not ready to provide that content.

    Instructions

    Moreover, if other people out there are doing the same thing and getting error messages saying "there is nothing" [but there actually is], then what use is it to blog; but more broadly, what use is it to rely on an aggregator, read content, and use time to review material?

    Bluntly: If PubSub cannot explain the lag time [which is real], then the world has a problem with the aggregator-with-advertising concept. Why would I as an end-user want to bother using XML, RSS, or Atom when the search codes I have access to only link me to messages that say, "We have nothing"?

    I don't care why it's happening. I have other things I can do. It's called read the newspaper. Listen to the radio.

    I can shut off the computer. Tell the ISP, "thanks, but there's no use in using you because you have too many bugs; there are too many viruses; and these new tools that people are making don't reliably work."

    I don't need to wait 96 hours to be told, "We have nothing." I can go to the corner newspaper bin and read the headlines through the bin-window.

    If that is what the XML community wants, great. You're well on your way to getting a vote of no confidence. Another one.

    Again, the issue resides within PubSub. There are delays. And I have heard nothing that suggests we have seen any data that would back up the assertion, "there is no problem." Rather, it's thrown back on the public to explain "what they did," or to accept that "the user has a problem."

    Nope. Not this time. You have coders. You have experts. You have professional code analyzers. Get them to figure it out.

    Something like this should not require this much effort to resolve, address, and deal with in a timely way. I can't get them to listen. Yet, if it were complicated, I think it would've already been discovered, reported, and fixed.

    Again, it is PubSub's system that is returning the message. What PubSub doesn't know is what is causing the problem, or why the "there is nothing" message continues.

    Inconsistent results

    Here's the surprise. We entered the same search strings within PubSub's own system, and it has produced inconsistent results. Again, PubSub needs to explain why their system, with very clear directions, does not provide consistent output, and why the content in one search doesn't match another search.

    Bluntly: It makes no sense to "load up the same search string" and in one, get nothing for 96 hours, and in the other a different result.
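    If anyone wants to check this for themselves, a consistency test is easy to sketch: create two subscriptions from the same search string, fetch both result feeds, and compare what each one returns. The sketch below does that in Python; the feed URLs are placeholders for whatever subscription feeds the instructions produce, and nothing here assumes any PubSub-specific API.

```python
import urllib.request
import xml.etree.ElementTree as ET

ATOM_NS = "{http://www.w3.org/2005/Atom}"

def feed_titles(url):
    """Fetch a feed and return the set of titles it contains
    (channel/feed title plus RSS <item> and Atom <entry> titles)."""
    with urllib.request.urlopen(url, timeout=30) as resp:
        root = ET.fromstring(resp.read())
    titles = [t.text for t in root.iter("title")]             # RSS
    titles += [t.text for t in root.iter(ATOM_NS + "title")]  # Atom
    return {t.strip() for t in titles if t}

# Placeholder URLs for two subscriptions built from the same search string.
FEED_A = "http://example.com/subscription-a.atom"
FEED_B = "http://example.com/subscription-b.atom"

a, b = feed_titles(FEED_A), feed_titles(FEED_B)
print("only in A:", sorted(a - b))
print("only in B:", sorted(b - a))
# If the two sets differ for the same search string, the platform itself is
# producing inconsistent output -- which is the complaint being made here.
```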

    Oh, and for those of you who want an example: Congratulations, your "lack of ability to find the test report" shows the limitations of PubSub. You see, we used PubSub to ping and publish the report -- so if something is "suddenly of interest" or "there is news here you didn't know about," you need to ask yourself: How many other reports are you missing; where is the report; and why are you only finding out about it now?

    The report was pinged. The site was updated. There is no problem with Blogger. The issue is that the information, once it is "out there," doesn't get picked up [for whatever reason].
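    For readers unfamiliar with what "pinged" means here: blogging tools send a small XML-RPC notification when a site updates, so ping-driven services know to re-crawl it. Below is a minimal sketch of the standard weblogUpdates.ping call. The endpoint shown is the well-known weblogs.com ping server and the blog details are placeholders; which ping endpoints PubSub actually watched is not something I am asserting here.

```python
import xmlrpc.client

# Well-known community ping server; a placeholder, not necessarily what PubSub used.
PING_ENDPOINT = "http://rpc.weblogs.com/RPC2"

def ping(blog_name: str, blog_url: str) -> dict:
    """Announce that a site has updated, per the weblogUpdates.ping convention.
    The response is a struct with 'flerror' (boolean) and 'message' (string)."""
    server = xmlrpc.client.ServerProxy(PING_ENDPOINT)
    return server.weblogUpdates.ping(blog_name, blog_url)

if __name__ == "__main__":
    result = ping("Example Blog", "http://blog.example.com/")
    print(result.get("flerror"), result.get("message"))
```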

    Who cares? Well, PubSub and the other fine developers working on FeedMesh want to do some integration. Great. Let's see some integration within a single platform before we talk about integration across multiple platforms.

    If you disagree with what I am saying and have data that says "there is nothing wrong" and "there is no way this could be happening" ... feel free to post it.

    Otherwise, when I hear the word, "PubSub," I think one thing: A platform that doesn't know why it has delays; a team that asserts it is great but will not show the exception reports; and a platform that wants the world to believe one thing, but has nothing to back up what it is saying.

    Exhibits

    Here is the repeated message indicating there is nothing, and showing there is a delay. We are going on 96 hours, and there's nothing.
    There are no messages in this digest-- yet.
    There are no results that match your subscription-- yet.
    Here are the detailed steps taken to achieve the "we have nothing" message. If you want to duplicate the test, feel free; if you have different results, fine. However, if you want to challenge the test, then come up with some data to explain why there are different test results across platforms.
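    For anyone who wants to duplicate the test with numbers rather than screenshots, a simple poll-and-log harness like the sketch below is enough. The feed URL is a placeholder for whatever subscription feed PubSub's instructions produce, and the 96-hour cutoff simply mirrors the wait described above; none of this is PubSub-specific code.

```python
import time
import urllib.request
import xml.etree.ElementTree as ET

ATOM_NS = "{http://www.w3.org/2005/Atom}"

def entry_count(feed_url):
    """Return the number of RSS items / Atom entries currently in a feed."""
    with urllib.request.urlopen(feed_url, timeout=30) as resp:
        root = ET.fromstring(resp.read())
    return len(root.findall(".//item")) + len(root.findall(".//" + ATOM_NS + "entry"))

def log_time_to_first_result(feed_url, poll_seconds=3600, max_hours=96):
    """Poll a subscription feed and record how long it takes before the first
    result appears -- the kind of evidence a "there is no delay" claim should answer."""
    start = time.time()
    while (time.time() - start) / 3600.0 < max_hours:
        hits = entry_count(feed_url)
        elapsed = (time.time() - start) / 3600.0
        print(f"{elapsed:5.1f} h elapsed, {hits} results")
        if hits > 0:
            return elapsed
        time.sleep(poll_seconds)
    return None  # still "we have nothing" after max_hours

# Placeholder feed created by following the subscription instructions:
# log_time_to_first_result("http://example.com/my-subscription.atom")
```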

    That's all I'm getting. Why? Nobody will give an answer; but they keep saying, "There is no delay." Well, if the code is wrong, and PubSub is internally inconsistent and provides this error message, what do you want to call it other than a delay?

    David Sifry on the Jeremy Zawodny blog said:
    "The rate at which the blogosphere is growing combined with our growth rate has been a tremendous challenge and opportunity."
    At this juncture, it appears as though many want to believe that PubSub is a reliable platform to use as a baseline to evaluate other products. Fine. Go ahead. Ignore this message.

    Know that there is an internal problem on the PubSub platform that generates these types of messages. I'm not clear how you can credibly use PubSub as a baseline when the "calibrated" system produces results which management does not appear to understand, accept, recognize, or embrace as something their product created.

    But I do have some questions that others may choose to evaluate or look into. For those who cannot read between the lines, these are questions that internal auditors using SAS99 may choose to consider along with their discussions with outside directors and auditors:

    Testing

    When a product is stress tested and outside parties are used, there are generally test plans and procedures. Usually what happens is the platform is slapped against criteria and stress tested. At this point there are several issues with the test.

  • Were there any outside, contract efforts used to conduct the test?

  • How was it determined that the "final test as conducted" matched the test plan?

    Integration

    When a platform is created, it remains to be understood how various sections of the same platform are integrated. It is problematic when one part of the platform produces a result that is not consistent with management's public statements.

    In this case, management appears to take the position that there is no delay. It remains unclear what basis management is using to assert this position, what data they used to arrive at it, or what level of confidence they have in it.

    - How was integration managed?

    - What types of meetings did they have?

    - How were integration issues identified, documented, resolved, and closed?

    - How were post-deployment issues dovetailed with integration data to show that the integration plans worked?

    - In those areas where there were discrepancies, how were these incorporated into other efforts outside PubSub software development?

    - What method did management use to incorporate the results?

    Subcontract management

    Outside contractors may have been used to conduct detailed tests. This is not unusual.

    If management's current position, despite the error messages, is that "there is no delay," this could be correct from the perspective of a subcontractor that has responsibility to code, test, develop, and integrate a section.

    Again, we make no comment on the integration matters. However, the issue becomes: "Because there are recurring messages indicating a delay in response that are not matching what upper management represents, is there a subcontract effort that relies on a favorable progress report?"

    Moreover, if there are outside contractors who have the responsibility to self-report, it remains unclear how upper management then went back to sample or do a complete review of the effort.

    This is not to say that a particular subcontractor has a specific problem, only to reconcile the issue: Why does the sub-element within the platform not perform at the level management continues to represent; and what relationship does this have, if any, to subcontract management?

    - Did any subcontractors self-report their testing results?

    - Was sampling of the test plans and reports 100% or some percentage?

    - How was it determined that the percentage selected was appropriate?

    Human factors testing

    One approach to integrating and developing a robust platform is to create something that works. This is a no-brainer. With time, bugs are worked out. This is called beta testing. And during deployment there are other issues that are resolved.

    Welcome to present time. PubSub is a mature system. It has a proven track record. There are over 6M subscriptions, sites, and feeds that the system tracks, manages, pings, and searches in order to provide responses to end-users.

    Brilliant. The concept is wonderful. However, let us consider the details when actually using the system.

    If there is a failure mode that is new, then that needs to be understood and duplicated elsewhere. The problem is that when we are working with a mature platform like PubSub, X years into deployment, and we continue to get a massive number of "nothing is here" messages, it looks as though there's a credibility problem with the human factors testing.

    It's not reasonable to believe that after 6M hits-uses-pings-subscriptions-sites used [focus on the 6M number, a lot], all of a sudden, out of the blue, someone shows up and says, "It doesn't work."

    The poor answer is to say, "The end-user is making an error." The proper approach is to review the human-factors tests, evaluate what was tested, and then compare that with the current instructions and directions provided to end-users, and ask, "Is there something that we have tested that the users are not doing; and is there something we are telling the users to do that we did not test?"

    I can't answer that. I have no visibility into the failure modes, stress testing, and human factors procedures incorporated into the process. Nor am I part of the decision process to translate code into instructions, and then take information from the user to create a response.

    All I have is the instructions and the output. The instructions are simple. They are straightforward. They are clear. PubSub should be commended for having a platform that is simple, clean, and comes with simple instructions.

    The problem is this: if those instructions are "so simple," why the multiple messages indicating there is "nothing there," even though other searches, with the same search terms, provide a valid output?

    If I am the only one on this planet who is able to produce the "there is a problem" result, then it seems reasonable to ask:

  • How many others have gotten the same message and didn't realize it?

  • How many people got a "we have no response"-type message and thought it was a normal delay?

  • How many people got a "we have nothing message" and think the problem lies with their aggregator?

    At this juncture, given I have received this many "we have nothing" messages, and it is going on 96 hours since we started, I ask the question: Is it possible that this has been going on since day 1, but nobody has done a 100% test on all procedures until now?

    Again, if there is data showing that someone has done 100% testing on all steps, and there is no way to duplicate these results, I invite PubSub to disclose that information and provide a clear statement: "We have done 100% testing on all steps, and there is no way to recreate this error."

    At that point, once that certification is made, and it is done publicly, clearly, and unequivocally, then the world will be in a position to ask: "Why do we keep getting this error message despite the 100% certification of task accomplishment?"

    There is something which PubSub has missed. There is something on their platform that is not consistently producing output. When they created their clear instructions, they asked the public to do something, but the coders know to do something else, or the coders are actually doing something slightly different from what their instructions imply.

    I encourage PubSub to conduct that test publicly. To show that the test platform works. Then create the needed baseline data to justify confidence.

    At this point, I have no data from PubSub. What I do have is 96 hours of nothing, no clear explanation, and management apparently suggesting that "there is no delay," without any explanation why I have nothing on these search strings.

    Who do I believe? My results, or management's statements?

    I have to go with what I have, not with what management is saying. If you have another view on this, you're free to blog and look into the matter. Run your own test. Do the steps. See if you, when carefully following their instructions, arrive at an output and result.

    Fine! But if you want to talk about "there being no problem" while providing no basis to believe that assertion, and nothing that says, "This is what you need to do," "these instructions are bad," or "don't do this," then I have nothing else to go on.

    Again, human factors integration is an issue if, this many years into the PubSub deployment, I show up, follow the instructions, and arrive at a conclusion that is new, different, unexpected.

    I continue to get the same results while following PubSub's instructions. Same messages. Same output.

    If I'm the only one getting this message, then I challenge PubSub to show that their human factors testing actually did all possible steps in the instructions; and that they have with 100% confidence demonstrated that all instructions, when followed to the letter, have generated a favorable result.

    At this point, I have my doubts. I suspect that someone has reported a test was done, but did not actually do it. I also suspect that someone has stated they completed the integration testing without looking at their instructions and "doing it like an end-user does."

    This does not reflect well on how the human factors engineering was designed, tested, and resolved.

    - Did human factors get addressed in the planning?

    - What method was used to demonstrate that user-end issues were incorporated into the design?

    - What method was used to reconcile the trace between code, requirements, instructions, and error messages?

    - What sampling of end-users did management do to identify whether the product and information provided actually achieved the desired result from the end-user's perspective, and then to compare that against the goals in the requirements and platform objectives?

    - What tests of their outputs, responses, and human-interface issues did PubSub management conduct to ensure that their product as deployed actually meets the intended XML objectives of providing easy access to content?

    - How were the test results incorporated into updates, responses, and problem resolution within the existing platform?

    Problem resolution

    Let's turn to the final step of this deployment process. Understandably, after many months of trial runs, outputs, and favorable messages, suddenly someone shows up and generates a "new message."

    Yet, "this new message" is what PubSub generates. Following their procedures. I also documented the error, test mode, and messages with unfavorable results. I have yet to hear a credible explanation of "What went wrong" or "how this was fixed" or "what we have corrected."

    In short, in my view, a valid report was issued, published, made public, yet the unfavorable news was not timely resolved.

    Again, the "we have nothing" message continues after the initial message, report, and opportunity to resolve the issue. If there's a solution, let's hear it; if there is no way to duplicate what the PubSub instructions say, then so state:

    "Mud's Tests: We are unable to duplicate your error mode. There is something we do not understand with our own system. Please help us."
    Yet, I have gotten the opposite: "There is no problem" and "there is no delay." Congratulations. PubSub is generating a "we have nothing" message and wants the world to believe "there is no delay." It's been 96 hours. I have nothing. That is a delay.

    All I have are the error messages. All I have is what PubSub is giving me.

    - Is there any evidence that responses and concerns raised were ignored, or that feedback was discouraged at critical times and milestones?

    - What impact did an SEC-imposed IPO-related "quiet period" have on public discussions of publicly available error modes; did the quiet period adversely affect management's ability to timely respond to inquiries about the platform and to questions about delays?

    Summary Review Questions

    - Review the human factors testing. Ask management to demonstrate that all failure modes have been tested with negative results. Look for documents that demonstrate the test requirements match both the system requirements and the advertised claims on response time.

    Review the test plan. Ensure that the test approach matches the instructions. Identify those areas within the test plan that do not match the instructions.

    Look for documents outlining a clear trace between the requirements, plan, and test results. Look for detailed comments on how deviations were reconciled. Inquire if there are gaps or missing steps.

    Review the subcontract effort. Identify those subcontractors that created sub-elements; the basis for them to assert 100% compliance; and how the change log was incorporated into revisions. Look for award fees paid to the subcontractors; identify any financial incentives paid and what performance standard was reported to have been achieved, and look for documentation showing there was data and evidence substantiating those award fees.

    Notice whether the accounts send funds to bona fide contractors that can be independently identified and have a track record of performance in similar efforts. Notice whether the progress and results compare favorably or unfavorably with the existing effort on PubSub.

    Review integration. Identify how it was shown that the various elements were brought together, and that each element consistently did what all the other elements do. Ask to see the detailed code logs which each team would have had to develop; and ask the team to reconcile how the various parts were brought together in a manner whereby each element was consistently certified.

    Look at the %-sampling of each section; was it 100% review, or were there only samples? Look for documentation that shows there is a plan in place to incorporate requirements; ensure coding efforts meet those requirements; test plans verify those requirements have been met; and all failure modes have been identified, resolved, and workarounds incorporated.

    Review the code certification process. Identify the method used to certify that all the test objectives, results, and failure modes adequately matched what was originally identified as a requirement.

    Look for a summary table that is signed and certifies, "There is no problem with delays." If management doesn't want to provide a timely response to this document, inquire why they publicly state one performance level, but are unable or unwilling to provide documentation that certifies this standard has been met, demonstrated, and continues to be met on an ongoing basis.

    Implications

    Whether PubSub goes IPO is irrelevant. A major news corporation like CBS just lost their chief reporter, Dan Rather.

    Bloggers can find things. It remains to be seen whether Jon Stewart is asked to come in to become the new PubSub CEO. He calls it like it is. He's the one who can see through the nonsense.

    At this point, I have waited 96 hours, have no explanation why this is happening, and continue to hear repeated assertions that "there is no delay."

    If I were watching TV, I might imagine that I'm watching Crossfire. That show got cancelled. And there is no reason a similar dispute cannot be similarly resolved.

    The world is eagerly awaiting PubSub's resolution to this issue. Again, what PubSub has or has not done remains to be understood. If you want something answered, direct your attention to PubSub to explain their system, and make them produce the data that backs up what they are saying: "There is no delay."

    Show me. Prove to me "the results I got" do not exist. Prove to me that PubSub has a product that creates something that works, reliably, and without outside support.

    At this juncture, it is clear the instructions say one thing, but what everyone else is doing is something else.

    There is a failure mode which PubSub does not understand. We are this many years into deployment, and this is news. It remains to be understood how many people have also gotten the same failure mode, or have been thinking there is "no problem" when they actually have a delay.

    In my case, I have no delay. I have "nothing."

    Going forward, if we are to believe that PubSub will form the calibration baseline for FeedMesh, or that FeedMesh is "on track," then there is much left to be desired. In this situation, we have a single platform producing inconsistent results. These challenges have been met with "There is no problem," which does not bode well for the prospects of integration testing with FeedMesh.

    Let us hope the developer-community as a whole can come to an understanding of what appears to not be working so that these lessons learned can be applied to FeedMesh.

    Summary

    PubSub management continues to assert "there is no delay." Fine. Show me. Prove it. Show me aggregate data that backs up this claim.

    If there is no data to "back up the claim that there is no delay," then say so. Say, "We think there is no delay, but we have no data from our system to back that up."
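    As an illustration of what "aggregate data" behind a no-delay claim might look like, here is a hypothetical sketch: measure publish-to-delivery times across repeated test posts and subscriptions, and report the distribution. Every number below is invented for illustration; none of it is a measurement from PubSub.

```python
import statistics

def latency_summary(samples):
    """Given (published_at, delivered_at) timestamp pairs in hours, return the
    aggregate delay figures that would back up -- or refute -- "there is no delay"."""
    delays = sorted(delivered - published for published, delivered in samples)
    return {
        "n": len(delays),
        "median_hours": statistics.median(delays),
        "p95_hours": delays[max(0, int(0.95 * len(delays)) - 1)],
        "max_hours": delays[-1],
        "still_nothing_at_96h": sum(1 for d in delays if d >= 96),
    }

# Invented measurements: two quick deliveries, two that never showed up within 96 hours.
samples = [(0.0, 0.2), (0.0, 1.5), (0.0, 96.0), (0.0, 96.0)]
print(latency_summary(samples))
```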

    I will not accept "The end user made an error." PubSub's instructions are clear. Their steps are simple. Their clear design is stellar.

    There is one small problem. It does not execute reliably, and the public, when following these clear instructions, is not able to consistently generate a reliable result.

    There are delays. PubSub doesn't understand why. It appears management wants to come up with all sorts of excuses.

    Find someone else. The President of the United States lied about WMD because "it was the right thing." I'm not calling Bob Wyman a liar. But I am calling him someone who is asserting something that there is no data to support:

    - There is no data to justify the conclusion, "There is no problem."

    - There is no data to justify believing, "There are no delays."

    - There is no basis to believe that the platform has no problems.

    How do I know this?

    This is what I have done: Run the test, got the messages, and the response is not consistent with what I am being told "is supposed to happen."

    Congratulations. This x-many years into deployment, we have new information and it appears as though management is sticking by their guns. They continue to assert something that I am not able to duplicate. I keep getting the "we have nothing" status messages.

    That is a delay. Not simply in getting the information, but in how management is resolving this issue. More of the "it's not us" and "it's somewhere else."

    I don't believe it. In my opinion, management has accepted the lower-level assertions of someone without checking the results. And this late in the game, someone has said, "We did all that," but they really didn't.

    That's important for directors, Wall Street, and outside investors to evaluate. It appears as though PubSub is saying "There is no problem," but there is one, they don't know why, and they are either unwilling or unable to deal with the issue.

    It would be a shame if a fine company like PubSub ended up having problems with integrity, financial reporting, and regulatory requirements simply because they said, "There is no problem" when the independent data showed otherwise.

    Notice how they resolve the issue. Once they go IPO, it's only going to get worse: The delays. The creative excuses. And the handwaving.

    PubSub has a problem with their platform. I have been getting error messages for 96 hours. I am not getting outputs. I am getting delays. Excuses.

    I am not impressed. And I question those who sing praises for something that, from my perspective, doesn't work.

    The Morgan Stanley report was instructive. I'm glad I reviewed the details myself. Page 11. Exhibits 10 and 11. That's what this is all about. If you can't reliably deliver content using simple instructions, then what confidence should advertisers have to use XML as an advertising platform?

    At this point, my eyeballs, even if they wanted to look at the XML-related ads, cannot see those ads. PubSub doesn't know why.

    I don't care why. But the advertisers who spend a lot of money paying for advertising would ask, "What is the issue?" and "Why all these messages?" I have no answer. PubSub has provided no data, and I'm being told, "there is no problem."

    In my opinion, advertisers and investment bankers might want to hold onto their dollars until this issue is understood and timely resolved.

    If you wish to proceed, you've been warned. Buyer beware!